Test Report: Docker_Linux_crio_arm64 21832

e7c87104757589f66628ccdf942f4e049b607564:2025-11-01:42155

Failed tests (36/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.3
35 TestAddons/parallel/Registry 15.22
36 TestAddons/parallel/RegistryCreds 0.49
37 TestAddons/parallel/Ingress 143.28
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.45
41 TestAddons/parallel/CSI 40.25
42 TestAddons/parallel/Headlamp 3.69
43 TestAddons/parallel/CloudSpanner 5.39
44 TestAddons/parallel/LocalPath 8.42
45 TestAddons/parallel/NvidiaDevicePlugin 5.28
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.66
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.15
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.17
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
146 TestFunctional/parallel/ServiceCmd/DeployApp 600.95
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 1.77
197 TestJSONOutput/unpause/Command 1.82
281 TestPause/serial/Pause 6.71
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.59
303 TestStartStop/group/old-k8s-version/serial/Pause 6.9
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.5
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.72
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.52
327 TestStartStop/group/embed-certs/serial/Pause 8.21
331 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.89
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.35
343 TestStartStop/group/newest-cni/serial/Pause 9.08
348 TestStartStop/group/no-preload/serial/Pause 6.61
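The three "addons disable" failures detailed below (Volcano, Registry, RegistryCreds) share one signature: exit status 11 (MK_ADDON_DISABLE_PAUSED). Before disabling an addon, minikube checks whether the cluster is paused; the stderr traces show it listing kube-system containers with crictl and then running "sudo runc list -f json", which fails on this crio node with "open /run/runc: no such file or directory". A minimal reproduction sketch, using only commands that appear verbatim in the traces below (the profile name addons-714840 is the one from this run; running the steps by hand like this is illustrative, not how the test harness drives them):

    # node container state, as minikube inspects it before the paused-state check
    docker container inspect addons-714840 --format={{.State.Status}}

    # the paused-state check that every disable attempt below runs and trips over
    out/minikube-linux-arm64 -p addons-714840 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-arm64 -p addons-714840 ssh "sudo runc list -f json"
    # observed result on this run: exit 1, "open /run/runc: no such file or directory"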
TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable volcano --alsologtostderr -v=1: exit status 11 (294.690707ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:50:02.936101  300980 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:02.936985  300980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:02.937027  300980 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:02.937045  300980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:02.937363  300980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:02.937720  300980 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:02.938147  300980 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:02.938187  300980 addons.go:607] checking whether the cluster is paused
	I1101 09:50:02.938335  300980 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:02.938368  300980 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:02.938868  300980 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:02.958439  300980 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:02.958506  300980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:02.975511  300980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:03.087948  300980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:03.088044  300980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:03.118020  300980 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:03.118045  300980 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:03.118050  300980 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:03.118054  300980 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:03.118062  300980 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:03.118066  300980 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:03.118070  300980 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:03.118073  300980 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:03.118076  300980 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:03.118082  300980 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:03.118086  300980 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:03.118093  300980 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:03.118101  300980 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:03.118105  300980 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:03.118109  300980 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:03.118117  300980 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:03.118120  300980 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:03.118124  300980 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:03.118127  300980 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:03.118130  300980 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:03.118136  300980 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:03.118142  300980 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:03.118145  300980 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:03.118148  300980 cri.go:89] found id: ""
	I1101 09:50:03.118199  300980 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:03.133807  300980 out.go:203] 
	W1101 09:50:03.136753  300980 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:03.136780  300980 out.go:285] * 
	* 
	W1101 09:50:03.144900  300980 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:03.147793  300980 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.30s)

TestAddons/parallel/Registry (15.22s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.364119ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003115627s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003147151s
addons_test.go:392: (dbg) Run:  kubectl --context addons-714840 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-714840 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-714840 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.643886293s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable registry --alsologtostderr -v=1: exit status 11 (297.510906ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:50:28.442646  301534 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:28.443485  301534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:28.443500  301534 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:28.443506  301534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:28.443778  301534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:28.444083  301534 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:28.444465  301534 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:28.444479  301534 addons.go:607] checking whether the cluster is paused
	I1101 09:50:28.444582  301534 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:28.444592  301534 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:28.445127  301534 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:28.461042  301534 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:28.461106  301534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:28.492893  301534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:28.599752  301534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:28.599848  301534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:28.632214  301534 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:28.632238  301534 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:28.632243  301534 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:28.632247  301534 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:28.632251  301534 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:28.632255  301534 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:28.632258  301534 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:28.632262  301534 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:28.632265  301534 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:28.632274  301534 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:28.632277  301534 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:28.632281  301534 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:28.632284  301534 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:28.632287  301534 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:28.632291  301534 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:28.632299  301534 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:28.632310  301534 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:28.632314  301534 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:28.632318  301534 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:28.632320  301534 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:28.632325  301534 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:28.632328  301534 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:28.632331  301534 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:28.632334  301534 cri.go:89] found id: ""
	I1101 09:50:28.632386  301534 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:28.650004  301534 out.go:203] 
	W1101 09:50:28.653285  301534 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:28.653317  301534 out.go:285] * 
	* 
	W1101 09:50:28.658295  301534 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:28.661271  301534 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.22s)

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.065106ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-714840
addons_test.go:332: (dbg) Run:  kubectl --context addons-714840 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (260.402864ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:51:14.591040  303529 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:51:14.591896  303529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:51:14.591938  303529 out.go:374] Setting ErrFile to fd 2...
	I1101 09:51:14.591960  303529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:51:14.592244  303529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:51:14.592563  303529 mustload.go:66] Loading cluster: addons-714840
	I1101 09:51:14.593072  303529 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:51:14.593116  303529 addons.go:607] checking whether the cluster is paused
	I1101 09:51:14.593249  303529 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:51:14.593284  303529 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:51:14.593817  303529 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:51:14.612219  303529 ssh_runner.go:195] Run: systemctl --version
	I1101 09:51:14.612272  303529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:51:14.630492  303529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:51:14.735846  303529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:51:14.736005  303529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:51:14.766383  303529 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:51:14.766405  303529 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:51:14.766411  303529 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:51:14.766415  303529 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:51:14.766420  303529 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:51:14.766423  303529 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:51:14.766427  303529 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:51:14.766430  303529 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:51:14.766434  303529 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:51:14.766444  303529 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:51:14.766447  303529 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:51:14.766451  303529 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:51:14.766455  303529 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:51:14.766459  303529 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:51:14.766462  303529 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:51:14.766472  303529 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:51:14.766482  303529 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:51:14.766490  303529 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:51:14.766493  303529 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:51:14.766498  303529 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:51:14.766503  303529 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:51:14.766506  303529 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:51:14.766509  303529 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:51:14.766512  303529 cri.go:89] found id: ""
	I1101 09:51:14.766576  303529 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:51:14.781932  303529 out.go:203] 
	W1101 09:51:14.784747  303529 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:51:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:51:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:51:14.784772  303529 out.go:285] * 
	* 
	W1101 09:51:14.789810  303529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:51:14.792635  303529 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (143.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-714840 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-714840 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-714840 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [18abab98-8d9d-400b-b992-9764dfb99569] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [18abab98-8d9d-400b-b992-9764dfb99569] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003387569s
I1101 09:50:58.489235  294288 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.530658103s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-714840 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-714840
helpers_test.go:243: (dbg) docker inspect addons-714840:

-- stdout --
	[
	    {
	        "Id": "c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9",
	        "Created": "2025-11-01T09:47:37.747589113Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295447,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:47:37.814855295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/hosts",
	        "LogPath": "/var/lib/docker/containers/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9-json.log",
	        "Name": "/addons-714840",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-714840:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-714840",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9",
	                "LowerDir": "/var/lib/docker/overlay2/9c5c3fba1b0d3deba2fb576c1d6bb043473ad67ea28e7e64bc49e52c6f90d1bd-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c5c3fba1b0d3deba2fb576c1d6bb043473ad67ea28e7e64bc49e52c6f90d1bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c5c3fba1b0d3deba2fb576c1d6bb043473ad67ea28e7e64bc49e52c6f90d1bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c5c3fba1b0d3deba2fb576c1d6bb043473ad67ea28e7e64bc49e52c6f90d1bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-714840",
	                "Source": "/var/lib/docker/volumes/addons-714840/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-714840",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-714840",
	                "name.minikube.sigs.k8s.io": "addons-714840",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90e19efb3514b7358870171644e8ede39b8886462b9c8dbc3f7fdc64179a3377",
	            "SandboxKey": "/var/run/docker/netns/90e19efb3514",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-714840": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:08:94:61:e5:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2ec2e4bdf07ebc49b6f3f28ea34af4ab99e24d4d2a098b7e81e52c59c2b45c0b",
	                    "EndpointID": "eec3bf0f47089c814107edfacd628d11abf1c24a2434396378e83c340232aa69",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-714840",
	                        "c1f1da656a11"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-714840 -n addons-714840
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-714840 logs -n 25: (1.502195942s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-896540                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-896540 │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ start   │ --download-only -p binary-mirror-569786 --alsologtostderr --binary-mirror http://127.0.0.1:45357 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-569786   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	│ delete  │ -p binary-mirror-569786                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-569786   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ addons  │ disable dashboard -p addons-714840                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	│ addons  │ enable dashboard -p addons-714840                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	│ start   │ -p addons-714840 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:50 UTC │
	│ addons  │ addons-714840 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ ip      │ addons-714840 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │ 01 Nov 25 09:50 UTC │
	│ addons  │ addons-714840 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ ssh     │ addons-714840 ssh cat /opt/local-path-provisioner/pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │ 01 Nov 25 09:50 UTC │
	│ addons  │ addons-714840 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ enable headlamp -p addons-714840 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ ssh     │ addons-714840 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:51 UTC │                     │
	│ addons  │ addons-714840 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:51 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-714840                                                                                                                                                                                                                                                                                                                                                                                           │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:51 UTC │ 01 Nov 25 09:51 UTC │
	│ addons  │ addons-714840 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:51 UTC │                     │
	│ ip      │ addons-714840 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:47:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:47:12.843619  295049 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:47:12.843986  295049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:47:12.844028  295049 out.go:374] Setting ErrFile to fd 2...
	I1101 09:47:12.844049  295049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:47:12.844355  295049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:47:12.844964  295049 out.go:368] Setting JSON to false
	I1101 09:47:12.845825  295049 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5385,"bootTime":1761985048,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 09:47:12.845936  295049 start.go:143] virtualization:  
	I1101 09:47:12.849391  295049 out.go:179] * [addons-714840] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:47:12.852405  295049 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:47:12.852488  295049 notify.go:221] Checking for updates...
	I1101 09:47:12.858191  295049 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:47:12.861252  295049 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 09:47:12.864181  295049 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 09:47:12.867035  295049 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:47:12.870023  295049 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:47:12.873330  295049 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:47:12.895609  295049 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:47:12.895733  295049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:47:12.956456  295049 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 09:47:12.947501916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:47:12.956564  295049 docker.go:319] overlay module found
	I1101 09:47:12.959668  295049 out.go:179] * Using the docker driver based on user configuration
	I1101 09:47:12.962515  295049 start.go:309] selected driver: docker
	I1101 09:47:12.962535  295049 start.go:930] validating driver "docker" against <nil>
	I1101 09:47:12.962561  295049 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:47:12.963306  295049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:47:13.019029  295049 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 09:47:13.009972034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:47:13.019199  295049 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:47:13.019440  295049 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:47:13.022344  295049 out.go:179] * Using Docker driver with root privileges
	I1101 09:47:13.025208  295049 cni.go:84] Creating CNI manager for ""
	I1101 09:47:13.025281  295049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:47:13.025290  295049 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:47:13.025371  295049 start.go:353] cluster config:
	{Name:addons-714840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 09:47:13.028458  295049 out.go:179] * Starting "addons-714840" primary control-plane node in "addons-714840" cluster
	I1101 09:47:13.031254  295049 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:47:13.034136  295049 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:47:13.037013  295049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:47:13.037076  295049 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:47:13.037090  295049 cache.go:59] Caching tarball of preloaded images
	I1101 09:47:13.037104  295049 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:47:13.037184  295049 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:47:13.037194  295049 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:47:13.037534  295049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/config.json ...
	I1101 09:47:13.037554  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/config.json: {Name:mked4fc3681e07235fb3e32952c51287c293d99b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:13.053141  295049 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:47:13.053271  295049 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:47:13.053296  295049 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:47:13.053303  295049 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:47:13.053315  295049 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:47:13.053321  295049 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 09:47:30.907072  295049 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 09:47:30.907112  295049 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:47:30.907143  295049 start.go:360] acquireMachinesLock for addons-714840: {Name:mkf6ac0e8c3fba79ae7fc6678b78aa6e902dfc1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:47:30.907268  295049 start.go:364] duration metric: took 99.16µs to acquireMachinesLock for "addons-714840"
	I1101 09:47:30.907300  295049 start.go:93] Provisioning new machine with config: &{Name:addons-714840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:47:30.907390  295049 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:47:30.911013  295049 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 09:47:30.911267  295049 start.go:159] libmachine.API.Create for "addons-714840" (driver="docker")
	I1101 09:47:30.911305  295049 client.go:173] LocalClient.Create starting
	I1101 09:47:30.911437  295049 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 09:47:30.952863  295049 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 09:47:31.018704  295049 cli_runner.go:164] Run: docker network inspect addons-714840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:47:31.035218  295049 cli_runner.go:211] docker network inspect addons-714840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:47:31.035316  295049 network_create.go:284] running [docker network inspect addons-714840] to gather additional debugging logs...
	I1101 09:47:31.035340  295049 cli_runner.go:164] Run: docker network inspect addons-714840
	W1101 09:47:31.050814  295049 cli_runner.go:211] docker network inspect addons-714840 returned with exit code 1
	I1101 09:47:31.050847  295049 network_create.go:287] error running [docker network inspect addons-714840]: docker network inspect addons-714840: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-714840 not found
	I1101 09:47:31.050863  295049 network_create.go:289] output of [docker network inspect addons-714840]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-714840 not found
	
	** /stderr **
	I1101 09:47:31.051026  295049 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:47:31.069387  295049 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1f510}
	I1101 09:47:31.069433  295049 network_create.go:124] attempt to create docker network addons-714840 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 09:47:31.069510  295049 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-714840 addons-714840
	I1101 09:47:31.131436  295049 network_create.go:108] docker network addons-714840 192.168.49.0/24 created
	I1101 09:47:31.131473  295049 kic.go:121] calculated static IP "192.168.49.2" for the "addons-714840" container
	I1101 09:47:31.131558  295049 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:47:31.148143  295049 cli_runner.go:164] Run: docker volume create addons-714840 --label name.minikube.sigs.k8s.io=addons-714840 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:47:31.166811  295049 oci.go:103] Successfully created a docker volume addons-714840
	I1101 09:47:31.166895  295049 cli_runner.go:164] Run: docker run --rm --name addons-714840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-714840 --entrypoint /usr/bin/test -v addons-714840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:47:33.253677  295049 cli_runner.go:217] Completed: docker run --rm --name addons-714840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-714840 --entrypoint /usr/bin/test -v addons-714840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.086732552s)
	I1101 09:47:33.253704  295049 oci.go:107] Successfully prepared a docker volume addons-714840
	I1101 09:47:33.253742  295049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:47:33.253760  295049 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:47:33.253826  295049 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-714840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:47:37.681114  295049 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-714840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.427253897s)
	I1101 09:47:37.681145  295049 kic.go:203] duration metric: took 4.427381669s to extract preloaded images to volume ...
	W1101 09:47:37.681310  295049 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:47:37.681429  295049 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:47:37.732870  295049 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-714840 --name addons-714840 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-714840 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-714840 --network addons-714840 --ip 192.168.49.2 --volume addons-714840:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:47:38.072459  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Running}}
	I1101 09:47:38.098960  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:47:38.124360  295049 cli_runner.go:164] Run: docker exec addons-714840 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:47:38.178949  295049 oci.go:144] the created container "addons-714840" has a running status.
	I1101 09:47:38.178980  295049 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa...
	I1101 09:47:38.328327  295049 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:47:38.354866  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:47:38.382112  295049 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:47:38.382139  295049 kic_runner.go:114] Args: [docker exec --privileged addons-714840 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:47:38.441756  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:47:38.462688  295049 machine.go:94] provisionDockerMachine start ...
	I1101 09:47:38.462797  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:38.491698  295049 main.go:143] libmachine: Using SSH client type: native
	I1101 09:47:38.492313  295049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1101 09:47:38.492334  295049 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:47:38.493125  295049 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:47:41.640599  295049 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-714840
	
	I1101 09:47:41.640625  295049 ubuntu.go:182] provisioning hostname "addons-714840"
	I1101 09:47:41.640696  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:41.657616  295049 main.go:143] libmachine: Using SSH client type: native
	I1101 09:47:41.657930  295049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1101 09:47:41.657948  295049 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-714840 && echo "addons-714840" | sudo tee /etc/hostname
	I1101 09:47:41.814474  295049 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-714840
	
	I1101 09:47:41.814575  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:41.831973  295049 main.go:143] libmachine: Using SSH client type: native
	I1101 09:47:41.832293  295049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1101 09:47:41.832315  295049 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-714840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-714840/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-714840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:47:41.981105  295049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:47:41.981129  295049 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 09:47:41.981162  295049 ubuntu.go:190] setting up certificates
	I1101 09:47:41.981171  295049 provision.go:84] configureAuth start
	I1101 09:47:41.981232  295049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-714840
	I1101 09:47:41.998738  295049 provision.go:143] copyHostCerts
	I1101 09:47:41.998842  295049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 09:47:41.999005  295049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 09:47:41.999068  295049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 09:47:41.999116  295049 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.addons-714840 san=[127.0.0.1 192.168.49.2 addons-714840 localhost minikube]
	I1101 09:47:42.358004  295049 provision.go:177] copyRemoteCerts
	I1101 09:47:42.358077  295049 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:47:42.358128  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:42.375834  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:42.480741  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:47:42.498439  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:47:42.516375  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:47:42.533368  295049 provision.go:87] duration metric: took 552.183791ms to configureAuth
	I1101 09:47:42.533392  295049 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:47:42.533592  295049 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:47:42.533691  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:42.550392  295049 main.go:143] libmachine: Using SSH client type: native
	I1101 09:47:42.550694  295049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1101 09:47:42.550713  295049 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:47:42.803021  295049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:47:42.803041  295049 machine.go:97] duration metric: took 4.340330275s to provisionDockerMachine
	I1101 09:47:42.803059  295049 client.go:176] duration metric: took 11.891741885s to LocalClient.Create
	I1101 09:47:42.803074  295049 start.go:167] duration metric: took 11.891808668s to libmachine.API.Create "addons-714840"
	I1101 09:47:42.803081  295049 start.go:293] postStartSetup for "addons-714840" (driver="docker")
	I1101 09:47:42.803091  295049 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:47:42.803166  295049 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:47:42.803210  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:42.822660  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:42.929263  295049 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:47:42.932787  295049 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:47:42.932814  295049 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:47:42.932844  295049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 09:47:42.932942  295049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 09:47:42.932972  295049 start.go:296] duration metric: took 129.88523ms for postStartSetup
	I1101 09:47:42.933293  295049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-714840
	I1101 09:47:42.950115  295049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/config.json ...
	I1101 09:47:42.950418  295049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:47:42.950466  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:42.973178  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:43.074196  295049 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:47:43.079411  295049 start.go:128] duration metric: took 12.172006109s to createHost
	I1101 09:47:43.079486  295049 start.go:83] releasing machines lock for "addons-714840", held for 12.17220273s
	I1101 09:47:43.079585  295049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-714840
	I1101 09:47:43.096394  295049 ssh_runner.go:195] Run: cat /version.json
	I1101 09:47:43.096453  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:43.096725  295049 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:47:43.096777  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:43.115693  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:43.126664  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:43.310260  295049 ssh_runner.go:195] Run: systemctl --version
	I1101 09:47:43.316791  295049 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:47:43.352328  295049 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:47:43.356845  295049 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:47:43.356916  295049 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:47:43.386072  295049 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:47:43.386094  295049 start.go:496] detecting cgroup driver to use...
	I1101 09:47:43.386128  295049 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:47:43.386196  295049 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:47:43.402430  295049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:47:43.414931  295049 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:47:43.415013  295049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:47:43.432744  295049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:47:43.452421  295049 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:47:43.561767  295049 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:47:43.677294  295049 docker.go:234] disabling docker service ...
	I1101 09:47:43.677400  295049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:47:43.698153  295049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:47:43.711540  295049 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:47:43.824402  295049 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:47:43.951039  295049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:47:43.964503  295049 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:47:43.978637  295049 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:47:43.978733  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:43.987800  295049 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:47:43.987904  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.004498  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.014302  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.023697  295049 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:47:44.032018  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.041315  295049 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.055402  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.065621  295049 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:47:44.073724  295049 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:47:44.081543  295049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:47:44.194143  295049 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:47:44.316208  295049 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:47:44.316294  295049 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:47:44.320708  295049 start.go:564] Will wait 60s for crictl version
	I1101 09:47:44.320769  295049 ssh_runner.go:195] Run: which crictl
	I1101 09:47:44.324613  295049 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:47:44.351022  295049 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:47:44.351129  295049 ssh_runner.go:195] Run: crio --version
	I1101 09:47:44.379256  295049 ssh_runner.go:195] Run: crio --version
	I1101 09:47:44.410704  295049 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:47:44.413478  295049 cli_runner.go:164] Run: docker network inspect addons-714840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:47:44.429137  295049 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:47:44.432885  295049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:47:44.442394  295049 kubeadm.go:884] updating cluster {Name:addons-714840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:47:44.442503  295049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:47:44.442565  295049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:47:44.473799  295049 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:47:44.473827  295049 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:47:44.473883  295049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:47:44.500832  295049 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:47:44.500855  295049 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:47:44.500864  295049 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:47:44.500965  295049 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-714840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:47:44.501058  295049 ssh_runner.go:195] Run: crio config
	I1101 09:47:44.571500  295049 cni.go:84] Creating CNI manager for ""
	I1101 09:47:44.571543  295049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:47:44.571569  295049 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:47:44.571596  295049 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-714840 NodeName:addons-714840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:47:44.571729  295049 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-714840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:47:44.571801  295049 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:47:44.580050  295049 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:47:44.580165  295049 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:47:44.587602  295049 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:47:44.601075  295049 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:47:44.615318  295049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1101 09:47:44.627927  295049 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:47:44.631492  295049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:47:44.641114  295049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:47:44.754117  295049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:47:44.769431  295049 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840 for IP: 192.168.49.2
	I1101 09:47:44.769501  295049 certs.go:195] generating shared ca certs ...
	I1101 09:47:44.769532  295049 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:44.769715  295049 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 09:47:45.855651  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt ...
	I1101 09:47:45.855691  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt: {Name:mk4cf6468ef14d02cbd92410cd4782247383e44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:45.855900  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key ...
	I1101 09:47:45.855915  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key: {Name:mkbb72774e975f12896558de8f15660fe435c737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:45.856001  295049 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 09:47:46.438280  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt ...
	I1101 09:47:46.438312  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt: {Name:mk75fe7abee7e2bf689341d7fc63412ff1c56ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:46.438488  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key ...
	I1101 09:47:46.438503  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key: {Name:mk0d554276ebbdf56caa33fbbdc37d214891a71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:46.438573  295049 certs.go:257] generating profile certs ...
	I1101 09:47:46.438638  295049 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.key
	I1101 09:47:46.438655  295049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt with IP's: []
	I1101 09:47:46.518500  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt ...
	I1101 09:47:46.518540  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: {Name:mk008a7fb412a8f7e0c037aa79a6e080994e63fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:46.518715  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.key ...
	I1101 09:47:46.518728  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.key: {Name:mkfbf8eb870384e4f6262a0b3a26653a945b8813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:46.518810  295049 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key.02841626
	I1101 09:47:46.518832  295049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt.02841626 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 09:47:47.164193  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt.02841626 ...
	I1101 09:47:47.164222  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt.02841626: {Name:mkbc2926a9f0443507812bd0cf620bed953ae434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:47.164396  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key.02841626 ...
	I1101 09:47:47.164410  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key.02841626: {Name:mk80c643fc96b5dd18d1f8a9eb5979373c38a755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:47.164494  295049 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt.02841626 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt
	I1101 09:47:47.164572  295049 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key.02841626 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key
	I1101 09:47:47.164623  295049 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.key
	I1101 09:47:47.164647  295049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.crt with IP's: []
	I1101 09:47:47.532521  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.crt ...
	I1101 09:47:47.532551  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.crt: {Name:mkf740531a1e7849e21aa37a19c12549fd5957b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:47.533333  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.key ...
	I1101 09:47:47.533357  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.key: {Name:mk84bc1536779125bb5db632c9430f67362944bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:47.533571  295049 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:47:47.533617  295049 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:47:47.533650  295049 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:47:47.533709  295049 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 09:47:47.534266  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:47:47.552660  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:47:47.571578  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:47:47.589590  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:47:47.609794  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:47:47.628330  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:47:47.646513  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:47:47.664295  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:47:47.682097  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:47:47.700683  295049 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:47:47.713668  295049 ssh_runner.go:195] Run: openssl version
	I1101 09:47:47.720004  295049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:47:47.728521  295049 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:47:47.732485  295049 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:47:47.732578  295049 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:47:47.773845  295049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
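	For reference (not part of the captured output): the b5213941.0 symlink name above is OpenSSL's subject-name hash, which is how CA certificates in /etc/ssl/certs are looked up. A minimal sketch of the same linking step, assuming the cert path used by this run:
	    # Compute the OpenSSL subject hash and link the CA under /etc/ssl/certs,
	    # mirroring the two ssh_runner steps above.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"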
	I1101 09:47:47.782710  295049 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:47:47.786446  295049 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:47:47.786517  295049 kubeadm.go:401] StartCluster: {Name:addons-714840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:47:47.786613  295049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:47:47.786690  295049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:47:47.814891  295049 cri.go:89] found id: ""
	I1101 09:47:47.815028  295049 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:47:47.823055  295049 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:47:47.831265  295049 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:47:47.831386  295049 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:47:47.839816  295049 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:47:47.839838  295049 kubeadm.go:158] found existing configuration files:
	
	I1101 09:47:47.839914  295049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:47:47.847920  295049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:47:47.847985  295049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:47:47.855765  295049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:47:47.863715  295049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:47:47.863833  295049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:47:47.871351  295049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:47:47.879379  295049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:47:47.879465  295049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:47:47.886918  295049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:47:47.897236  295049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:47:47.897306  295049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:47:47.906040  295049 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:47:47.961969  295049 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:47:47.962034  295049 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:47:47.986338  295049 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:47:47.986449  295049 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:47:47.986512  295049 kubeadm.go:319] OS: Linux
	I1101 09:47:47.986584  295049 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:47:47.986658  295049 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:47:47.986750  295049 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:47:47.986841  295049 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:47:47.986919  295049 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:47:47.987021  295049 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:47:47.987091  295049 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:47:47.987160  295049 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:47:47.987242  295049 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:47:48.066454  295049 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:47:48.066613  295049 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:47:48.066744  295049 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:47:48.077019  295049 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:47:48.080236  295049 out.go:252]   - Generating certificates and keys ...
	I1101 09:47:48.080426  295049 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:47:48.080521  295049 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:47:48.736193  295049 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:47:49.011011  295049 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:47:49.603343  295049 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:47:49.673166  295049 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:47:49.739268  295049 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:47:49.739640  295049 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-714840 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:47:50.696305  295049 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:47:50.696668  295049 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-714840 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:47:51.329965  295049 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:47:51.647035  295049 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:47:52.274847  295049 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:47:52.275237  295049 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:47:52.698810  295049 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:47:52.820002  295049 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:47:53.883978  295049 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:47:54.947925  295049 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:47:55.190971  295049 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:47:55.191515  295049 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:47:55.194217  295049 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:47:55.197599  295049 out.go:252]   - Booting up control plane ...
	I1101 09:47:55.197732  295049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:47:55.198170  295049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:47:55.199615  295049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:47:55.217423  295049 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:47:55.217769  295049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:47:55.226601  295049 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:47:55.227302  295049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:47:55.227600  295049 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:47:55.379132  295049 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:47:55.379258  295049 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:47:56.379897  295049 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000920948s
	I1101 09:47:56.383513  295049 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:47:56.383622  295049 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 09:47:56.383716  295049 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:47:56.383797  295049 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:47:58.905652  295049 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.521514249s
	I1101 09:48:01.646009  295049 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.262539265s
	I1101 09:48:03.385245  295049 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001425626s
	I1101 09:48:03.405594  295049 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:48:03.419942  295049 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:48:03.434806  295049 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:48:03.435072  295049 kubeadm.go:319] [mark-control-plane] Marking the node addons-714840 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:48:03.453110  295049 kubeadm.go:319] [bootstrap-token] Using token: 4hiyw7.npwciemn6akdakal
	I1101 09:48:03.458027  295049 out.go:252]   - Configuring RBAC rules ...
	I1101 09:48:03.458155  295049 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:48:03.464703  295049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:48:03.475357  295049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:48:03.480681  295049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:48:03.489354  295049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:48:03.494012  295049 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:48:03.794316  295049 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:48:04.221962  295049 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:48:04.791512  295049 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:48:04.792686  295049 kubeadm.go:319] 
	I1101 09:48:04.792793  295049 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:48:04.792819  295049 kubeadm.go:319] 
	I1101 09:48:04.792902  295049 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:48:04.792912  295049 kubeadm.go:319] 
	I1101 09:48:04.792979  295049 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:48:04.793052  295049 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:48:04.793110  295049 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:48:04.793119  295049 kubeadm.go:319] 
	I1101 09:48:04.793175  295049 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:48:04.793184  295049 kubeadm.go:319] 
	I1101 09:48:04.793234  295049 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:48:04.793242  295049 kubeadm.go:319] 
	I1101 09:48:04.793296  295049 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:48:04.793378  295049 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:48:04.793453  295049 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:48:04.793461  295049 kubeadm.go:319] 
	I1101 09:48:04.793549  295049 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:48:04.793639  295049 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:48:04.793665  295049 kubeadm.go:319] 
	I1101 09:48:04.793759  295049 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4hiyw7.npwciemn6akdakal \
	I1101 09:48:04.793874  295049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 09:48:04.793904  295049 kubeadm.go:319] 	--control-plane 
	I1101 09:48:04.793917  295049 kubeadm.go:319] 
	I1101 09:48:04.794006  295049 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:48:04.794016  295049 kubeadm.go:319] 
	I1101 09:48:04.794101  295049 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4hiyw7.npwciemn6akdakal \
	I1101 09:48:04.794212  295049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 09:48:04.797787  295049 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:48:04.798041  295049 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:48:04.798197  295049 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
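	For reference (not part of the captured output): the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of how it can be recomputed on the control plane, assuming the /var/lib/minikube/certs directory that the CA was copied to earlier in this log:
	    # Recompute the discovery token CA cert hash (sha256 of the CA public key in DER form).
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'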
	I1101 09:48:04.798234  295049 cni.go:84] Creating CNI manager for ""
	I1101 09:48:04.798254  295049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:48:04.801452  295049 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:48:04.805266  295049 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:48:04.809243  295049 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:48:04.809265  295049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:48:04.822642  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:48:05.096639  295049 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:48:05.096734  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:05.096783  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-714840 minikube.k8s.io/updated_at=2025_11_01T09_48_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=addons-714840 minikube.k8s.io/primary=true
	I1101 09:48:05.228861  295049 ops.go:34] apiserver oom_adj: -16
	I1101 09:48:05.228992  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:05.729243  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:06.229066  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:06.729619  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:07.230028  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:07.729086  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:08.230112  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:08.729778  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:08.825841  295049 kubeadm.go:1114] duration metric: took 3.729158989s to wait for elevateKubeSystemPrivileges
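	For reference (not part of the captured output): the repeated "kubectl get sa default" runs above form a poll loop; the cluster is treated as ready for RBAC setup once the default ServiceAccount exists. A minimal equivalent, assuming the same kubeconfig and binary paths shown in the log:
	    # Poll until the "default" ServiceAccount has been created by the controller manager.
	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done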
	I1101 09:48:08.825873  295049 kubeadm.go:403] duration metric: took 21.039379784s to StartCluster
	I1101 09:48:08.825892  295049 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:48:08.826003  295049 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 09:48:08.826397  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:48:08.826601  295049 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:48:08.826750  295049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:48:08.827038  295049 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:48:08.827077  295049 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:48:08.827159  295049 addons.go:70] Setting yakd=true in profile "addons-714840"
	I1101 09:48:08.827179  295049 addons.go:239] Setting addon yakd=true in "addons-714840"
	I1101 09:48:08.827203  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.827706  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.828069  295049 addons.go:70] Setting metrics-server=true in profile "addons-714840"
	I1101 09:48:08.828092  295049 addons.go:239] Setting addon metrics-server=true in "addons-714840"
	I1101 09:48:08.828118  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.828560  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.828717  295049 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-714840"
	I1101 09:48:08.828734  295049 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-714840"
	I1101 09:48:08.828754  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.829169  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.832100  295049 addons.go:70] Setting registry=true in profile "addons-714840"
	I1101 09:48:08.832138  295049 addons.go:239] Setting addon registry=true in "addons-714840"
	I1101 09:48:08.832173  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.832687  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.832844  295049 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-714840"
	I1101 09:48:08.832883  295049 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-714840"
	I1101 09:48:08.832956  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.834213  295049 addons.go:70] Setting registry-creds=true in profile "addons-714840"
	I1101 09:48:08.834244  295049 addons.go:239] Setting addon registry-creds=true in "addons-714840"
	I1101 09:48:08.834278  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.834592  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.834683  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.852848  295049 addons.go:70] Setting cloud-spanner=true in profile "addons-714840"
	I1101 09:48:08.852964  295049 addons.go:239] Setting addon cloud-spanner=true in "addons-714840"
	I1101 09:48:08.853033  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.853558  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.856096  295049 addons.go:70] Setting storage-provisioner=true in profile "addons-714840"
	I1101 09:48:08.856175  295049 addons.go:239] Setting addon storage-provisioner=true in "addons-714840"
	I1101 09:48:08.856242  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.856854  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.873281  295049 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-714840"
	I1101 09:48:08.873313  295049 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-714840"
	I1101 09:48:08.873339  295049 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-714840"
	I1101 09:48:08.873351  295049 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-714840"
	I1101 09:48:08.873379  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.873667  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.873815  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.883201  295049 addons.go:70] Setting default-storageclass=true in profile "addons-714840"
	I1101 09:48:08.883241  295049 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-714840"
	I1101 09:48:08.883610  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.887780  295049 addons.go:70] Setting volcano=true in profile "addons-714840"
	I1101 09:48:08.887823  295049 addons.go:239] Setting addon volcano=true in "addons-714840"
	I1101 09:48:08.887861  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.888825  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.899697  295049 addons.go:70] Setting gcp-auth=true in profile "addons-714840"
	I1101 09:48:08.899740  295049 mustload.go:66] Loading cluster: addons-714840
	I1101 09:48:08.899972  295049 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:48:08.900230  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.905014  295049 addons.go:70] Setting volumesnapshots=true in profile "addons-714840"
	I1101 09:48:08.905054  295049 addons.go:239] Setting addon volumesnapshots=true in "addons-714840"
	I1101 09:48:08.905091  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.905570  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.907383  295049 out.go:179] * Verifying Kubernetes components...
	I1101 09:48:08.927577  295049 addons.go:70] Setting ingress=true in profile "addons-714840"
	I1101 09:48:08.927614  295049 addons.go:239] Setting addon ingress=true in "addons-714840"
	I1101 09:48:08.927663  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.928144  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.949201  295049 addons.go:70] Setting ingress-dns=true in profile "addons-714840"
	I1101 09:48:08.949239  295049 addons.go:239] Setting addon ingress-dns=true in "addons-714840"
	I1101 09:48:08.949282  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.949779  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.977468  295049 addons.go:70] Setting inspektor-gadget=true in profile "addons-714840"
	I1101 09:48:08.977501  295049 addons.go:239] Setting addon inspektor-gadget=true in "addons-714840"
	I1101 09:48:08.977538  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.977996  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.981794  295049 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:48:08.987024  295049 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:48:08.995032  295049 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:48:08.995112  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:48:08.995221  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.004518  295049 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:48:09.007494  295049 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:48:09.007519  295049 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:48:09.007590  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.020937  295049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:48:09.021638  295049 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:48:09.024786  295049 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-714840"
	I1101 09:48:09.024903  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:09.025939  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:09.044639  295049 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:48:09.045266  295049 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:48:09.052816  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:48:09.056069  295049 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:48:09.056191  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.053648  295049 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:48:09.085180  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:48:09.087143  295049 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:48:09.087343  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:48:09.087658  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.093629  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:48:09.055823  295049 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:48:09.094765  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:48:09.094844  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.055850  295049 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:48:09.114208  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:48:09.114280  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.125699  295049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:48:09.126252  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:48:09.129840  295049 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1101 09:48:09.139214  295049 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:48:09.139365  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:48:09.153718  295049 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:48:09.153795  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:48:09.153892  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.171825  295049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:48:09.171885  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:48:09.176316  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.178907  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:48:09.181857  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:48:09.185498  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:48:09.188357  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:48:09.194027  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:48:09.194115  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:48:09.194210  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.223202  295049 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:48:09.229337  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:48:09.230318  295049 addons.go:239] Setting addon default-storageclass=true in "addons-714840"
	I1101 09:48:09.230360  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:09.230903  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:09.237476  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:48:09.237498  295049 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:48:09.237572  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.265382  295049 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:48:09.268217  295049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:48:09.268241  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:48:09.268308  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.284877  295049 host.go:66] Checking if "addons-714840" exists ...
	W1101 09:48:09.286719  295049 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:48:09.297832  295049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:48:09.297965  295049 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:48:09.302462  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.306406  295049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:48:09.307752  295049 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:48:09.310704  295049 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:48:09.310728  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:48:09.310803  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.312694  295049 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:48:09.312718  295049 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:48:09.312794  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.322817  295049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:48:09.323116  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.326201  295049 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:48:09.326219  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:48:09.326281  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.357130  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.372818  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.373674  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.399008  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.399029  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.438085  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.461509  295049 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:48:09.461533  295049 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:48:09.461599  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.463050  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.480247  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.488871  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.504208  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.519789  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.519855  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	W1101 09:48:09.522342  295049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:48:09.522391  295049 retry.go:31] will retry after 133.988056ms: ssh: handshake failed: EOF
	I1101 09:48:09.533810  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	W1101 09:48:09.535691  295049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:48:09.535714  295049 retry.go:31] will retry after 164.928826ms: ssh: handshake failed: EOF
	I1101 09:48:09.614214  295049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1101 09:48:09.708114  295049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:48:09.708147  295049 retry.go:31] will retry after 222.486304ms: ssh: handshake failed: EOF
	I1101 09:48:10.072314  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:48:10.072340  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:48:10.084118  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:48:10.149757  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:48:10.149848  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:48:10.191564  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:48:10.198987  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:48:10.216258  295049 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:48:10.216323  295049 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:48:10.236736  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:48:10.250007  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:48:10.250083  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:48:10.254066  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:48:10.254137  295049 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:48:10.298643  295049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:48:10.298727  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:48:10.309494  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:48:10.314401  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:48:10.320905  295049 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:10.321154  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:48:10.336579  295049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:48:10.336652  295049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:48:10.363204  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:48:10.364683  295049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.238926314s)
	I1101 09:48:10.364748  295049 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 09:48:10.366515  295049 node_ready.go:35] waiting up to 6m0s for node "addons-714840" to be "Ready" ...
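	For reference (not part of the captured output): the node readiness wait that starts here can also be expressed directly with kubectl; a sketch, assuming kubectl is pointed at the addons-714840 cluster:
	    # Block until the node reports the Ready condition, or give up after 6 minutes.
	    kubectl wait --for=condition=Ready node/addons-714840 --timeout=6m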
	I1101 09:48:10.406283  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:48:10.414073  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:48:10.414147  295049 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:48:10.417003  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:48:10.447223  295049 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:48:10.447302  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:48:10.473979  295049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:48:10.474048  295049 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:48:10.549762  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:48:10.549836  295049 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:48:10.551419  295049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:48:10.551481  295049 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:48:10.605107  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:48:10.605183  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:48:10.610410  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:10.648074  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:48:10.673710  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:48:10.673778  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:48:10.676795  295049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:48:10.676866  295049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:48:10.693457  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:48:10.833862  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:48:10.837889  295049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:48:10.837963  295049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:48:10.844192  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:48:10.844217  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:48:10.869045  295049 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-714840" context rescaled to 1 replicas
	I1101 09:48:11.011118  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:48:11.011144  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:48:11.036667  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:48:11.036694  295049 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:48:11.343272  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:48:11.343304  295049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:48:11.398963  295049 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:48:11.398994  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:48:11.412717  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.32851794s)
	I1101 09:48:11.617208  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:48:11.617282  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:48:11.659699  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:48:11.711489  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.519837078s)
	I1101 09:48:11.801403  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:48:11.801482  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:48:12.051120  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:48:12.051196  295049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:48:12.177086  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1101 09:48:12.383894  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:13.020603  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.821525859s)
	I1101 09:48:13.020717  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.783911931s)
	I1101 09:48:13.586990  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.277405209s)
	I1101 09:48:13.587104  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.272627556s)
	I1101 09:48:14.214156  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.850872015s)
	W1101 09:48:14.412142  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:15.365966  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.948878098s)
	I1101 09:48:15.366197  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.755502829s)
	W1101 09:48:15.366217  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:15.366233  295049 retry.go:31] will retry after 136.638648ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
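
Note on the error above: the apply of /etc/kubernetes/addons/ig-crd.yaml keeps failing client-side validation because the manifest's top-level apiVersion and kind fields are not set, and the later --force retries do not bypass that validation, so every attempt hits the same message. As a rough illustration only (the gadget CRD's real group and resource names are not visible in this log, so the identifiers below are placeholders), a manifest that passes this check must begin with those two fields:

    # Hedged sketch: a minimal CustomResourceDefinition that satisfies the
    # "apiVersion not set, kind not set" validation. Group/name values are
    # placeholders for illustration, not taken from the actual ig-crd.yaml.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.gadget.example.io   # placeholder
    spec:
      group: gadget.example.io           # placeholder
      scope: Namespaced
      names:
        plural: examples
        singular: example
        kind: Example
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    EOF
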
	I1101 09:48:15.366293  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.718145099s)
	I1101 09:48:15.366303  295049 addons.go:480] Verifying addon metrics-server=true in "addons-714840"
	I1101 09:48:15.366333  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.672802413s)
	I1101 09:48:15.366341  295049 addons.go:480] Verifying addon registry=true in "addons-714840"
	I1101 09:48:15.366602  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.9602943s)
	I1101 09:48:15.366724  295049 addons.go:480] Verifying addon ingress=true in "addons-714840"
	I1101 09:48:15.367048  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.533110877s)
	I1101 09:48:15.367385  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.707602549s)
	W1101 09:48:15.368671  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:48:15.368696  295049 retry.go:31] will retry after 348.42652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
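
Note on the error above: this one is an ordering issue rather than a bad manifest. The VolumeSnapshot CRDs are created in the same kubectl apply, but the API server has not yet established them when csi-hostpath-snapshotclass.yaml is validated, hence "no matches for kind VolumeSnapshotClass". minikube simply retries until the CRDs are registered; a hedged alternative sketch (assuming a kubeconfig pointing at this cluster, which the log invokes differently via ssh_runner) is to wait for the CRD before applying the class:

    # Hedged sketch: apply the snapshot CRDs, wait until the API server reports
    # them as Established, then apply the VolumeSnapshotClass. File paths reuse
    # the addon manifests named in the log above.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
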
	I1101 09:48:15.369582  295049 out.go:179] * Verifying registry addon...
	I1101 09:48:15.369613  295049 out.go:179] * Verifying ingress addon...
	I1101 09:48:15.371487  295049 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-714840 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:48:15.375262  295049 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:48:15.375327  295049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:48:15.406452  295049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:48:15.406472  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:15.406950  295049 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:48:15.406966  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:15.503690  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:15.713204  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.536019399s)
	I1101 09:48:15.713239  295049 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-714840"
	I1101 09:48:15.716600  295049 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:48:15.717877  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:48:15.721704  295049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:48:15.734406  295049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:48:15.734480  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:15.881249  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:15.881674  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:16.229221  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:16.379797  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:16.380479  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:16.625750  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.121972438s)
	W1101 09:48:16.625786  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:16.625807  295049 retry.go:31] will retry after 542.876452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:16.725738  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:16.869980  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:16.881703  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:16.881764  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:17.018829  295049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:48:17.018961  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:17.037189  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:17.162371  295049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:48:17.169674  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:17.176857  295049 addons.go:239] Setting addon gcp-auth=true in "addons-714840"
	I1101 09:48:17.177003  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:17.177474  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:17.203647  295049 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:48:17.203709  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:17.226251  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:17.226354  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:17.379945  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:17.380420  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:17.725629  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:17.880360  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:17.880708  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:48:18.020686  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:18.020802  295049 retry.go:31] will retry after 313.866685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:18.023926  295049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:48:18.026901  295049 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:48:18.029822  295049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:48:18.029860  295049 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:48:18.044444  295049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:48:18.044532  295049 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:48:18.059372  295049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:48:18.059450  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:48:18.073985  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:48:18.224723  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:18.335077  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:18.381075  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:18.381492  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:18.637384  295049 addons.go:480] Verifying addon gcp-auth=true in "addons-714840"
	I1101 09:48:18.640975  295049 out.go:179] * Verifying gcp-auth addon...
	I1101 09:48:18.644589  295049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:48:18.649201  295049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:48:18.649270  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:18.750157  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:18.870476  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:18.879708  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:18.880757  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:19.148686  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:19.225902  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:19.282933  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:19.282965  295049 retry.go:31] will retry after 1.138525801s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:19.379160  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:19.379338  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:19.648566  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:19.725694  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:19.879343  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:19.879667  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:20.147823  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:20.225529  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:20.378800  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:20.379095  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:20.422383  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:20.647814  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:20.725040  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:20.879477  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:20.879556  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:21.147600  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:48:21.221797  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:21.221830  295049 retry.go:31] will retry after 1.895111913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:21.224232  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:21.370069  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:21.379334  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:21.379622  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:21.647474  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:21.725405  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:21.878971  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:21.879267  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:22.149236  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:22.225128  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:22.379352  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:22.379507  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:22.649187  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:22.724969  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:22.879475  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:22.879630  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:23.117974  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:23.148176  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:23.225958  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:23.370187  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:23.381092  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:23.381487  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:23.647805  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:23.725459  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:23.881214  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:23.881478  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:48:23.927708  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:23.927740  295049 retry.go:31] will retry after 1.237875137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:24.147953  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:24.224907  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:24.380574  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:24.381008  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:24.648486  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:24.725968  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:24.879508  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:24.879567  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:25.148261  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:25.166413  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:25.225319  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:25.371947  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:25.379885  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:25.380323  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:25.647953  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:25.725497  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:25.881103  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:25.881532  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:48:25.978901  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:25.978934  295049 retry.go:31] will retry after 1.740039919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:26.147968  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:26.224733  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:26.378900  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:26.379048  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:26.647985  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:26.725286  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:26.880818  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:26.881464  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:27.147498  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:27.225299  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:27.381053  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:27.381451  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:27.648547  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:27.719686  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:27.725296  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:27.869795  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:27.880100  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:27.880445  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:28.148060  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:28.225804  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:28.379211  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:28.379558  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:48:28.522635  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:28.522676  295049 retry.go:31] will retry after 6.367920624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:28.647521  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:28.725674  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:28.878141  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:28.878507  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:29.147630  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:29.225698  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:29.378602  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:29.378812  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:29.647873  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:29.724504  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:29.870175  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:29.879419  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:29.879816  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:30.148238  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:30.225400  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:30.379725  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:30.379999  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:30.648744  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:30.724556  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:30.879131  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:30.879450  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:31.148441  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:31.225521  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:31.378989  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:31.379076  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:31.648200  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:31.725467  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:31.870453  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:31.878815  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:31.878975  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:32.148125  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:32.224820  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:32.378782  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:32.378976  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:32.648163  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:32.725063  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:32.879485  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:32.879492  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:33.148325  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:33.225559  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:33.378794  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:33.379081  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:33.648122  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:33.724711  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:33.878638  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:33.878784  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:34.147959  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:34.225409  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:34.370553  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:34.382200  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:34.389899  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:34.648170  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:34.725598  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:34.879315  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:34.879414  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:34.891612  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:35.148032  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:35.224903  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:35.380593  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:35.381020  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:35.649054  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:48:35.716994  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:35.717026  295049 retry.go:31] will retry after 7.523911616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:35.725411  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:35.878140  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:35.878955  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:36.147985  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:36.224991  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:36.379952  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:36.379986  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:36.648126  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:36.724794  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:36.869386  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:36.880310  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:36.880606  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:37.147720  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:37.225508  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:37.379149  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:37.379507  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:37.647511  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:37.725406  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:37.879279  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:37.879760  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:38.147634  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:38.225536  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:38.378967  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:38.379113  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:38.648140  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:38.725117  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:38.870210  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:38.879414  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:38.879611  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:39.147733  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:39.224889  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:39.378440  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:39.379046  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:39.649182  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:39.725004  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:39.879271  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:39.879649  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:40.148052  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:40.225036  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:40.379576  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:40.379616  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:40.648265  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:40.725200  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:40.870652  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:40.879361  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:40.879650  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:41.147356  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:41.225531  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:41.378787  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:41.378892  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:41.647933  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:41.724996  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:41.878727  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:41.878877  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:42.148689  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:42.225918  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:42.378415  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:42.378625  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:42.648268  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:42.725528  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:42.878409  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:42.878564  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:43.147653  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:43.229909  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:43.242081  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:48:43.370380  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:43.378710  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:43.378926  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:43.648310  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:43.727192  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:43.879888  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:43.879993  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:48:44.062971  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:44.063004  295049 retry.go:31] will retry after 8.722094097s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:44.147929  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:44.224729  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:44.378321  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:44.378547  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:44.647503  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:44.725196  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:44.880030  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:44.880174  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:45.150221  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:45.225222  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:45.373340  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:45.379420  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:45.379903  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:45.647906  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:45.724710  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:45.879466  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:45.880025  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:46.147952  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:46.226877  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:46.378660  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:46.378912  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:46.648311  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:46.725338  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:46.879215  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:46.880882  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:47.148059  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:47.225451  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:47.379589  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:47.380038  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:47.648054  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:47.724865  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:47.870414  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:47.879299  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:47.879551  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:48.147619  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:48.225332  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:48.379524  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:48.379672  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:48.647917  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:48.724880  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:48.879432  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:48.879543  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:49.147836  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:49.225251  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:49.379035  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:49.379118  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:49.648240  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:49.725400  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:49.879243  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:49.881163  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:50.148233  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:50.225285  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:50.370082  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:50.378896  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:50.379083  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:50.648274  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:50.725271  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:50.879964  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:50.880050  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:51.148636  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:51.225575  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:51.379190  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:51.379332  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:51.648371  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:51.725047  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:51.878931  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:51.878940  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:52.148136  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:52.225190  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:52.370973  295049 node_ready.go:49] node "addons-714840" is "Ready"
	I1101 09:48:52.371013  295049 node_ready.go:38] duration metric: took 42.004277348s for node "addons-714840" to be "Ready" ...
	I1101 09:48:52.371027  295049 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:48:52.371134  295049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:48:52.395011  295049 api_server.go:72] duration metric: took 43.568376456s to wait for apiserver process to appear ...
	I1101 09:48:52.395094  295049 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:48:52.395137  295049 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 09:48:52.412796  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:52.413015  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:52.432551  295049 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 09:48:52.440655  295049 api_server.go:141] control plane version: v1.34.1
	I1101 09:48:52.440734  295049 api_server.go:131] duration metric: took 45.610034ms to wait for apiserver health ...
	I1101 09:48:52.440759  295049 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:48:52.481851  295049 system_pods.go:59] 18 kube-system pods found
	I1101 09:48:52.482022  295049 system_pods.go:61] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending
	I1101 09:48:52.482046  295049 system_pods.go:61] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending
	I1101 09:48:52.482113  295049 system_pods.go:61] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending
	I1101 09:48:52.482144  295049 system_pods.go:61] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:52.482182  295049 system_pods.go:61] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:52.482227  295049 system_pods.go:61] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:52.482312  295049 system_pods.go:61] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:52.482338  295049 system_pods.go:61] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending
	I1101 09:48:52.482360  295049 system_pods.go:61] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:52.482397  295049 system_pods.go:61] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:52.482479  295049 system_pods.go:61] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending
	I1101 09:48:52.482507  295049 system_pods.go:61] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending
	I1101 09:48:52.482528  295049 system_pods.go:61] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending
	I1101 09:48:52.482565  295049 system_pods.go:61] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending
	I1101 09:48:52.482650  295049 system_pods.go:61] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending
	I1101 09:48:52.482677  295049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending
	I1101 09:48:52.482723  295049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending
	I1101 09:48:52.482746  295049 system_pods.go:61] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending
	I1101 09:48:52.482814  295049 system_pods.go:74] duration metric: took 42.033793ms to wait for pod list to return data ...
	I1101 09:48:52.482841  295049 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:48:52.522030  295049 default_sa.go:45] found service account: "default"
	I1101 09:48:52.522107  295049 default_sa.go:55] duration metric: took 39.225238ms for default service account to be created ...
	I1101 09:48:52.522146  295049 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:48:52.551484  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:52.551566  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending
	I1101 09:48:52.551588  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending
	I1101 09:48:52.551610  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending
	I1101 09:48:52.551645  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending
	I1101 09:48:52.551674  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:52.551698  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:52.551736  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:52.551761  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:52.551782  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending
	I1101 09:48:52.551817  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:52.551842  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:52.551863  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending
	I1101 09:48:52.551899  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending
	I1101 09:48:52.551924  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending
	I1101 09:48:52.551951  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:52.551986  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending
	I1101 09:48:52.552012  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending
	I1101 09:48:52.552035  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending
	I1101 09:48:52.552069  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending
	I1101 09:48:52.552102  295049 retry.go:31] will retry after 212.35502ms: missing components: kube-dns
	I1101 09:48:52.654070  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:52.786070  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:52.809713  295049 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:48:52.809789  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:52.819163  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:52.819250  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:48:52.819272  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending
	I1101 09:48:52.819296  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending
	I1101 09:48:52.819329  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending
	I1101 09:48:52.819353  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:52.819385  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:52.819419  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:52.819444  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:52.819465  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending
	I1101 09:48:52.819499  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:52.819532  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:52.819556  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:52.819588  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending
	I1101 09:48:52.819614  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:52.819642  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:52.819675  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending
	I1101 09:48:52.819704  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:52.819725  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending
	I1101 09:48:52.819760  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending
	I1101 09:48:52.819797  295049 retry.go:31] will retry after 238.204487ms: missing components: kube-dns
	I1101 09:48:52.914771  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:52.917278  295049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:48:52.917300  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:53.065935  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:53.066024  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:48:53.066050  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:48:53.066099  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:48:53.066123  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending
	I1101 09:48:53.066145  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:53.066180  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:53.066205  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:53.066227  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:53.066269  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:48:53.066317  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:53.066352  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:53.066382  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:53.066403  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending
	I1101 09:48:53.066443  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:53.066469  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:53.066491  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending
	I1101 09:48:53.066535  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:53.066561  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending
	I1101 09:48:53.066584  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:48:53.066630  295049 retry.go:31] will retry after 414.475783ms: missing components: kube-dns
	I1101 09:48:53.159758  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:53.234104  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:53.379999  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:53.380167  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:53.487796  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:53.487888  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:48:53.487913  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:48:53.487953  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:48:53.487977  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:48:53.487997  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:53.488029  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:53.488053  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:53.488074  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:53.488113  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:48:53.488138  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:53.488160  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:53.488199  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:53.488226  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:48:53.488252  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:53.488294  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:53.488315  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:48:53.488353  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:53.488379  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:53.488400  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:48:53.488447  295049 retry.go:31] will retry after 575.227137ms: missing components: kube-dns
	I1101 09:48:53.658223  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:53.756356  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:53.880043  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:53.880512  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:54.070836  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:54.070922  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:48:54.070949  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:48:54.070991  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:48:54.071021  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:48:54.071043  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:54.071081  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:54.071105  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:54.071126  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:54.071165  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:48:54.071188  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:54.071212  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:54.071250  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:54.071279  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:48:54.071306  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:54.071346  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:54.071368  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:48:54.071407  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:54.071433  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:54.071456  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:48:54.071502  295049 retry.go:31] will retry after 507.349859ms: missing components: kube-dns
	I1101 09:48:54.149118  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:54.225426  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:54.380537  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:54.380493  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:54.413230  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.6270613s)
	W1101 09:48:54.413319  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:54.413354  295049 retry.go:31] will retry after 10.756894019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:54.593635  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:54.593718  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Running
	I1101 09:48:54.593745  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:48:54.593788  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:48:54.593813  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:48:54.593831  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:54.593852  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:54.593884  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:54.593909  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:54.593933  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:48:54.593968  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:54.593994  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:54.594024  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:54.594079  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:48:54.594107  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:54.594130  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:54.594166  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:48:54.594192  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:54.594225  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:54.594270  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:48:54.594295  295049 system_pods.go:126] duration metric: took 2.072124766s to wait for k8s-apps to be running ...
	I1101 09:48:54.594329  295049 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:48:54.594422  295049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:48:54.619374  295049 system_svc.go:56] duration metric: took 25.034852ms WaitForService to wait for kubelet
	I1101 09:48:54.619450  295049 kubeadm.go:587] duration metric: took 45.792819992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:48:54.619494  295049 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:48:54.622914  295049 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:48:54.622996  295049 node_conditions.go:123] node cpu capacity is 2
	I1101 09:48:54.623023  295049 node_conditions.go:105] duration metric: took 3.511174ms to run NodePressure ...
	I1101 09:48:54.623049  295049 start.go:242] waiting for startup goroutines ...
	I1101 09:48:54.691910  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:54.725824  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:54.879733  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:54.879853  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:55.148557  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:55.249203  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:55.380866  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:55.381292  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:55.648632  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:55.725532  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:55.883117  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:55.883589  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:56.147880  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:56.227229  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:56.384762  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:56.385609  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:56.648373  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:56.729362  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:56.885285  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:56.886019  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:57.148271  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:57.227130  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:57.381095  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:57.381313  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:57.650241  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:57.750702  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:57.893168  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:57.893488  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:58.147727  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:58.225932  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:58.380720  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:58.380711  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:58.648293  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:58.726356  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:58.879734  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:58.880771  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:59.147893  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:59.224866  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:59.379571  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:59.380356  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:59.647958  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:59.725432  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:59.879298  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:59.880219  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:00.152463  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:00.239649  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:00.392413  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:00.426980  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:00.649257  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:00.726047  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:00.880864  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:00.881272  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:01.148722  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:01.225154  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:01.381149  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:01.381973  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:01.649018  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:01.726967  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:01.881272  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:01.881773  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:02.148094  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:02.224917  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:02.380478  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:02.380663  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:02.648583  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:02.726140  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:02.879871  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:02.880230  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:03.148683  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:03.250690  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:03.379192  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:03.379346  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:03.648902  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:03.725887  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:03.881270  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:03.881558  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:04.147422  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:04.225984  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:04.379712  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:04.381082  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:04.648232  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:04.726312  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:04.879901  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:04.879998  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:05.149010  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:05.171273  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:49:05.224762  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:05.380193  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:05.380707  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:05.675707  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:05.770071  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:05.880798  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:05.881364  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:06.149203  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:06.226123  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:06.379170  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:06.379411  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:06.526232  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.354920353s)
	W1101 09:49:06.526312  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:49:06.526345  295049 retry.go:31] will retry after 19.510029492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:49:06.648370  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:06.725655  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:06.883386  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:06.884261  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:07.148723  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:07.225090  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:07.380388  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:07.381052  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:07.675580  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:07.763908  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:07.878488  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:07.879483  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:08.148203  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:08.226447  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:08.381932  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:08.382471  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:08.648282  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:08.726004  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:08.880832  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:08.881108  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:09.148280  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:09.226017  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:09.380255  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:09.380854  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:09.648477  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:09.726153  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:09.879155  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:09.879808  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:10.148232  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:10.225977  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:10.380363  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:10.380778  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:10.648045  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:10.725413  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:10.879701  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:10.879965  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:11.148123  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:11.225984  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:11.380942  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:11.381193  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:11.648686  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:11.725702  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:11.881029  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:11.881540  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:12.147797  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:12.225631  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:12.379235  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:12.380101  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:12.649908  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:12.725230  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:12.879687  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:12.879954  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:13.148306  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:13.225825  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:13.382038  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:13.382414  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:13.648439  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:13.726785  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:13.879513  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:13.879686  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:14.148471  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:14.226326  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:14.379977  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:14.380592  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:14.649431  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:14.726046  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:14.880387  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:14.881111  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:15.148951  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:15.225507  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:15.381645  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:15.381810  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:15.648259  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:15.725775  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:15.880554  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:15.881366  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:16.147908  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:16.225559  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:16.380113  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:16.380622  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:16.648003  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:16.725391  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:16.882937  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:16.882940  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:17.147851  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:17.224527  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:17.379537  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:17.379719  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:17.648616  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:17.725757  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:17.883769  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:17.884278  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:18.148759  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:18.250454  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:18.379737  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:18.381477  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:18.650268  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:18.739035  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:18.880970  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:18.881161  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:19.148545  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:19.225733  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:19.379597  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:19.379784  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:19.648568  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:19.729927  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:19.879938  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:19.880636  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:20.147843  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:20.225238  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:20.380631  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:20.380786  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:20.647885  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:20.725411  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:20.879665  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:20.879803  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:21.148371  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:21.226345  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:21.378879  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:21.379586  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:21.647892  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:21.725541  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:21.879849  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:21.880325  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:22.148020  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:22.226205  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:22.380117  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:22.380216  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:22.650129  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:22.726305  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:22.885370  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:22.886733  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:23.148514  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:23.226878  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:23.381099  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:23.381569  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:23.647792  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:23.725435  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:23.880453  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:23.880845  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:24.147532  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:24.225791  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:24.380277  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:24.380765  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:24.651502  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:24.726161  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:24.881488  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:24.881981  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:25.148493  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:25.225879  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:25.379032  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:25.380493  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:25.648820  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:25.728518  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:25.879674  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:25.880113  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:26.037417  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:49:26.148146  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:26.226862  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:26.380049  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:26.380226  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:26.649348  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:26.726590  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:26.880278  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:26.880573  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:27.129350  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.091888914s)
	W1101 09:49:27.129387  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:49:27.129407  295049 retry.go:31] will retry after 31.459578892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:49:27.148471  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:27.226158  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:27.378709  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:27.379506  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:27.648001  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:27.725581  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:27.881783  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:27.882198  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:28.148623  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:28.225708  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:28.380258  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:28.380436  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:28.647525  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:28.726058  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:28.880666  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:28.880797  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:29.148066  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:29.230961  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:29.379510  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:29.380004  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:29.647966  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:29.725215  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:29.878950  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:29.879272  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:30.148586  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:30.226604  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:30.379232  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:30.379630  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:30.648069  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:30.726512  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:30.879844  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:30.880527  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:31.148507  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:31.225803  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:31.381190  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:31.382478  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:31.647866  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:31.725736  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:31.879699  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:31.879893  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:32.147862  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:32.226841  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:32.379683  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:32.380140  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:32.648714  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:32.749856  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:32.879381  295049 kapi.go:107] duration metric: took 1m17.504049153s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:49:32.879568  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:33.150311  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:33.252332  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:33.378583  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:33.648676  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:33.726204  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:33.878296  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:34.149330  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:34.225956  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:34.379267  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:34.650592  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:34.728563  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:34.879421  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:35.148886  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:35.225922  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:35.379357  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:35.648394  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:35.726346  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:35.879253  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:36.149117  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:36.226418  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:36.379017  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:36.648069  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:36.725736  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:36.881135  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:37.148400  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:37.225867  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:37.378820  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:37.648465  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:37.726021  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:37.878851  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:38.148801  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:38.226155  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:38.380796  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:38.650394  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:38.728295  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:38.878697  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:39.147765  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:39.225294  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:39.379344  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:39.647476  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:39.748669  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:39.879782  295049 kapi.go:107] duration metric: took 1m24.504523441s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:49:40.148772  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:40.225268  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:40.649348  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:40.726305  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:41.148651  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:41.225536  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:41.648057  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:41.725887  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:42.151203  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:42.225557  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:42.647673  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:42.726538  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:43.151246  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:43.250433  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:43.648453  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:43.727249  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:44.148257  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:44.225934  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:44.651167  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:44.758740  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:45.163045  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:45.261704  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:45.647603  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:45.725780  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:46.149565  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:46.226318  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:46.648134  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:46.725514  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:47.148331  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:47.249161  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:47.648824  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:47.725602  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:48.148356  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:48.226417  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:48.648588  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:48.751215  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:49.148623  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:49.225922  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:49.647501  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:49.725677  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:50.149725  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:50.225206  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:50.647765  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:50.725252  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:51.148179  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:51.225403  295049 kapi.go:107] duration metric: took 1m35.503695445s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:49:51.647697  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:52.148747  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:52.649036  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:53.147592  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:53.648750  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:54.148678  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:54.648894  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:55.148522  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:55.647906  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:56.148822  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:56.648512  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:57.148175  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:57.648127  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:58.147795  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:58.589188  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:49:58.648476  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:59.151377  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:59.649114  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:59.984830  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.395595613s)
	W1101 09:49:59.984881  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:49:59.985001  295049 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
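The retry warning above fails because /etc/kubernetes/addons/ig-crd.yaml is missing the top-level apiVersion and kind fields that kubectl's client-side validation requires. As a minimal sketch only (the group, names, and schema below are illustrative placeholders, not the real inspektor-gadget CRD), this is the header shape that passes validation, checked with a client-side dry run:

# Placeholder CRD: apiVersion and kind are the two fields the error says are unset.
# --dry-run=client still runs the client-side schema validation without creating anything.
cat <<'EOF' | kubectl apply --dry-run=client -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
EOF
# The escape hatch the error message itself suggests (skips this validation entirely):
#   kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml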
	I1101 09:50:00.149784  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:50:00.650355  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:50:01.148492  295049 kapi.go:107] duration metric: took 1m42.503902767s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:50:01.151525  295049 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-714840 cluster.
	I1101 09:50:01.154273  295049 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:50:01.156904  295049 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
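The three gcp-auth notes above describe the opt-out mechanism. A hedged sketch of a pod that skips the credential mount follows; the pod name and image are placeholders, and the label value "true" is an assumption, since the message only names the gcp-auth-skip-secret key:

# Hypothetical pod carrying the gcp-auth-skip-secret label key from the note above.
cat <<'EOF' | kubectl --context addons-714840 apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                  # placeholder name
  labels:
    gcp-auth-skip-secret: "true"      # key taken from the log; the value here is assumed
spec:
  containers:
    - name: app
      image: busybox:stable           # placeholder image
      command: ["sleep", "3600"]
EOF
# Per the last note, existing pods pick up credentials only after being recreated, or after
# re-running:  minikube -p addons-714840 addons enable gcp-auth --refresh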
	I1101 09:50:01.159930  295049 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1101 09:50:01.162842  295049 addons.go:515] duration metric: took 1m52.335736601s for enable addons: enabled=[nvidia-device-plugin registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1101 09:50:01.162913  295049 start.go:247] waiting for cluster config update ...
	I1101 09:50:01.162937  295049 start.go:256] writing updated cluster config ...
	I1101 09:50:01.163254  295049 ssh_runner.go:195] Run: rm -f paused
	I1101 09:50:01.167625  295049 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:50:01.171736  295049 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jxfw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.177952  295049 pod_ready.go:94] pod "coredns-66bc5c9577-jxfw2" is "Ready"
	I1101 09:50:01.177986  295049 pod_ready.go:86] duration metric: took 6.218377ms for pod "coredns-66bc5c9577-jxfw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.181353  295049 pod_ready.go:83] waiting for pod "etcd-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.187111  295049 pod_ready.go:94] pod "etcd-addons-714840" is "Ready"
	I1101 09:50:01.187147  295049 pod_ready.go:86] duration metric: took 5.758403ms for pod "etcd-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.189783  295049 pod_ready.go:83] waiting for pod "kube-apiserver-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.195901  295049 pod_ready.go:94] pod "kube-apiserver-addons-714840" is "Ready"
	I1101 09:50:01.195936  295049 pod_ready.go:86] duration metric: took 6.120358ms for pod "kube-apiserver-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.198818  295049 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.572385  295049 pod_ready.go:94] pod "kube-controller-manager-addons-714840" is "Ready"
	I1101 09:50:01.572424  295049 pod_ready.go:86] duration metric: took 373.574477ms for pod "kube-controller-manager-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.771953  295049 pod_ready.go:83] waiting for pod "kube-proxy-jkzc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:02.172312  295049 pod_ready.go:94] pod "kube-proxy-jkzc6" is "Ready"
	I1101 09:50:02.172341  295049 pod_ready.go:86] duration metric: took 400.361119ms for pod "kube-proxy-jkzc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:02.371946  295049 pod_ready.go:83] waiting for pod "kube-scheduler-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:02.772269  295049 pod_ready.go:94] pod "kube-scheduler-addons-714840" is "Ready"
	I1101 09:50:02.772299  295049 pod_ready.go:86] duration metric: took 400.323391ms for pod "kube-scheduler-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:02.772312  295049 pod_ready.go:40] duration metric: took 1.6046497s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:50:02.838511  295049 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:50:02.841638  295049 out.go:179] * Done! kubectl is now configured to use "addons-714840" cluster and "default" namespace by default
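The readiness polling and the client/server skew note above can be reproduced by hand against this profile; a minimal sketch, assuming the kubectl context is named after the profile (as the "Done!" line indicates) and using an arbitrary 4-minute timeout:

# Wait on the same kube-system labels the pod_ready loop checks.
kubectl --context addons-714840 -n kube-system wait pod -l k8s-app=kube-dns \
  --for=condition=Ready --timeout=4m
kubectl --context addons-714840 -n kube-system wait pod -l component=kube-apiserver \
  --for=condition=Ready --timeout=4m

# Show client and server versions; the 1.33 client against the 1.34 server is the
# one-minor-version skew the log reports, which kubectl supports.
kubectl --context addons-714840 version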
	
	
	==> CRI-O <==
	Nov 01 09:53:05 addons-714840 crio[829]: time="2025-11-01T09:53:05.746236675Z" level=info msg="Removed container b26dc3221238718c93e7db4fecf93355f41ffe616b82ca3e02e57b9a18c239de: kube-system/registry-creds-764b6fb674-bnkwh/registry-creds" id=10f35c46-6332-469d-983e-d81b270b3081 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.624303396Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-6qr6f/POD" id=864bf225-91e1-4194-9c05-14df7d4c2b81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.624387967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.636828806Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6qr6f Namespace:default ID:5143679c53627fd11eb154bb665da4ad679f66d43581cc50840e4c2780b6cc86 UID:fac6ff5f-c753-4e44-bd48-c6dacc716485 NetNS:/var/run/netns/366046f1-c754-40cb-a0cf-d0ebdc0051a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40011b1320}] Aliases:map[]}"
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.636874993Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-6qr6f to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.651434363Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6qr6f Namespace:default ID:5143679c53627fd11eb154bb665da4ad679f66d43581cc50840e4c2780b6cc86 UID:fac6ff5f-c753-4e44-bd48-c6dacc716485 NetNS:/var/run/netns/366046f1-c754-40cb-a0cf-d0ebdc0051a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40011b1320}] Aliases:map[]}"
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.651752421Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-6qr6f for CNI network kindnet (type=ptp)"
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.65965749Z" level=info msg="Ran pod sandbox 5143679c53627fd11eb154bb665da4ad679f66d43581cc50840e4c2780b6cc86 with infra container: default/hello-world-app-5d498dc89-6qr6f/POD" id=864bf225-91e1-4194-9c05-14df7d4c2b81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.660859927Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=098ea868-b981-4a8c-b13e-e034344ac007 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.661404916Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=098ea868-b981-4a8c-b13e-e034344ac007 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.661524794Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=098ea868-b981-4a8c-b13e-e034344ac007 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.662378814Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=51e6f78c-9f8a-4af6-b46f-7c99e84c6072 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:53:08 addons-714840 crio[829]: time="2025-11-01T09:53:08.66387142Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.369950196Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=51e6f78c-9f8a-4af6-b46f-7c99e84c6072 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.370993962Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=68a993f2-58e2-4a73-8dd1-b82557b106d3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.373170389Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a8046490-d860-4f7f-b952-dbd36ecab276 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.38108783Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-6qr6f/hello-world-app" id=2c77cc44-7d6c-4a92-9a4c-4e9ad4f78dc6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.38152488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.389352417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.389558417Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f52bca828f769231bbd490158672db085cfeeb49e2e2e1243b0991df844df16e/merged/etc/passwd: no such file or directory"
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.389581367Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f52bca828f769231bbd490158672db085cfeeb49e2e2e1243b0991df844df16e/merged/etc/group: no such file or directory"
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.389858907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.416945222Z" level=info msg="Created container 0a83d2ec0da556800e79111bd56d28754b15c2fc81369ee3b0307c7c20790393: default/hello-world-app-5d498dc89-6qr6f/hello-world-app" id=2c77cc44-7d6c-4a92-9a4c-4e9ad4f78dc6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.42007935Z" level=info msg="Starting container: 0a83d2ec0da556800e79111bd56d28754b15c2fc81369ee3b0307c7c20790393" id=df95e847-dc32-4831-8f09-c311a9bd87ad name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:53:09 addons-714840 crio[829]: time="2025-11-01T09:53:09.42423358Z" level=info msg="Started container" PID=7202 containerID=0a83d2ec0da556800e79111bd56d28754b15c2fc81369ee3b0307c7c20790393 description=default/hello-world-app-5d498dc89-6qr6f/hello-world-app id=df95e847-dc32-4831-8f09-c311a9bd87ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=5143679c53627fd11eb154bb665da4ad679f66d43581cc50840e4c2780b6cc86
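The CRI-O entries above trace the usual container start path for hello-world-app (RunPodSandbox, image pull, CreateContainer, StartContainer). As a sketch of how the same objects could be inspected from the node with crictl, reusing the sandbox and container IDs logged above (run inside `minikube -p addons-714840 ssh`):

# List the sandbox and its container, then inspect and read logs for the started container.
sudo crictl pods --name hello-world-app-5d498dc89-6qr6f
sudo crictl ps --pod 5143679c53627fd11eb154bb665da4ad679f66d43581cc50840e4c2780b6cc86
sudo crictl inspect 0a83d2ec0da556800e79111bd56d28754b15c2fc81369ee3b0307c7c20790393
sudo crictl logs 0a83d2ec0da556800e79111bd56d28754b15c2fc81369ee3b0307c7c20790393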
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	0a83d2ec0da55       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   5143679c53627       hello-world-app-5d498dc89-6qr6f             default
	254cca6ba3a5c       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             5 seconds ago            Exited              registry-creds                           1                   c832cabf8d455       registry-creds-764b6fb674-bnkwh             kube-system
	8587fb68a4fa2       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   a62537e1bab6f       nginx                                       default
	9fd9d44fbc02e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   d5f668a69c058       busybox                                     default
	cfb354a027b1d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   39f70f641227a       gcp-auth-78565c9fb4-rfbql                   gcp-auth
	10a1c7de04e0d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                    kube-system
	4a127573889cd       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                    kube-system
	0d38db82f09d9       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                    kube-system
	7dcafd9990f60       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                    kube-system
	290dbe24a3813       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                    kube-system
	09545d05e577c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   9f997d17c414a       gadget-lhntn                                gadget
	9abf6837f0e01       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   017c407ac9a7f       ingress-nginx-controller-675c5ddd98-9bmq7   ingress-nginx
	57fd4de0c99ca       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   af2005f3c652a       registry-proxy-w2s6j                        kube-system
	57568bd94e7af       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   cd9c12bb7d3f7       registry-6b586f9694-czvz6                   kube-system
	8972f335d55fe       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   8bfe43ba216e4       csi-hostpath-resizer-0                      kube-system
	39741cf195269       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   85e40d0c7dd24       nvidia-device-plugin-daemonset-2t6gg        kube-system
	7a84a50fa7c2b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                    kube-system
	ef27f5b38a203       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   d1132e6497f59       local-path-provisioner-648f6765c9-bmh8h     local-path-storage
	3c062f36827d4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   4317d0e9a5be0       ingress-nginx-admission-patch-8mgj2         ingress-nginx
	7fbdc489ecd4a       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   08fc0c5637c5e       cloud-spanner-emulator-6f9fcf858b-jlz98     default
	656c40399f18d       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   89f2e06945e17       kube-ingress-dns-minikube                   kube-system
	68903857276a8       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   6ab199cf77d2c       csi-hostpath-attacher-0                     kube-system
	a1a58b7ec669a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   e98d252727bf9       snapshot-controller-7d9fbc56b8-gk5gb        kube-system
	f957f816c5c98       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   d5c146311f97e       ingress-nginx-admission-create-99jl2        ingress-nginx
	b88b182078ac0       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   5d18a82873c9a       yakd-dashboard-5ff678cb9-9rb44              yakd-dashboard
	4fbf88d999b23       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   2bcf0ff74fa46       snapshot-controller-7d9fbc56b8-fzk67        kube-system
	678a88e760bce       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   0a46dac934b61       metrics-server-85b7d694d7-mshff             kube-system
	4e5de8a419785       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   387c0f82ee9d8       coredns-66bc5c9577-jxfw2                    kube-system
	c0ddb9895a9b9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   77761049a265d       storage-provisioner                         kube-system
	5b14178d10461       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   9a849f07a6959       kindnet-thg89                               kube-system
	6949baeb846a9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   f4c22e1ebe2a9       kube-proxy-jkzc6                            kube-system
	a35a59e2848f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   7225d0238f3f8       kube-apiserver-addons-714840                kube-system
	15771f960cfb3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   6ede78832e2e8       kube-scheduler-addons-714840                kube-system
	17dd29eab394d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   c19a2f6c36fc7       kube-controller-manager-addons-714840       kube-system
	5fabe274c8207       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   7e7dd7e85c3a6       etcd-addons-714840                          kube-system
	
	
	==> coredns [4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc] <==
	[INFO] 10.244.0.11:42245 - 25443 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002073361s
	[INFO] 10.244.0.11:42245 - 63629 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000127615s
	[INFO] 10.244.0.11:42245 - 10187 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00008266s
	[INFO] 10.244.0.11:59289 - 15405 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194898s
	[INFO] 10.244.0.11:59289 - 15643 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000283383s
	[INFO] 10.244.0.11:52453 - 18234 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134721s
	[INFO] 10.244.0.11:52453 - 18021 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070531s
	[INFO] 10.244.0.11:46028 - 36384 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084424s
	[INFO] 10.244.0.11:46028 - 36195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072s
	[INFO] 10.244.0.11:37955 - 60678 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.000834205s
	[INFO] 10.244.0.11:37955 - 61123 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001343444s
	[INFO] 10.244.0.11:37145 - 47621 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119681s
	[INFO] 10.244.0.11:37145 - 47481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158541s
	[INFO] 10.244.0.21:43557 - 44662 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00022808s
	[INFO] 10.244.0.21:58360 - 24108 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118803s
	[INFO] 10.244.0.21:52709 - 11444 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000177339s
	[INFO] 10.244.0.21:55031 - 34502 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000309056s
	[INFO] 10.244.0.21:38632 - 52286 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183747s
	[INFO] 10.244.0.21:40253 - 63330 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106413s
	[INFO] 10.244.0.21:60503 - 60323 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002130667s
	[INFO] 10.244.0.21:60446 - 5681 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001965349s
	[INFO] 10.244.0.21:50287 - 55702 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001570031s
	[INFO] 10.244.0.21:38799 - 65285 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002212702s
	[INFO] 10.244.0.23:54429 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000203571s
	[INFO] 10.244.0.23:39702 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106306s
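The alternating NXDOMAIN/NOERROR pairs above are the normal search-path expansion for in-cluster lookups: each configured search suffix is tried before the fully qualified service name answers. A rough way to observe the same expansion from inside the cluster (pod name and image are placeholders):

# Print the pod's resolver search list, then resolve the registry service directly.
kubectl --context addons-714840 run dns-probe --rm -it --restart=Never \
  --image=busybox:stable -- sh -c \
  'cat /etc/resolv.conf; nslookup registry.kube-system.svc.cluster.local'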
	
	
	==> describe nodes <==
	Name:               addons-714840
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-714840
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=addons-714840
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_48_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-714840
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-714840"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:48:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-714840
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:53:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:51:08 +0000   Sat, 01 Nov 2025 09:47:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:51:08 +0000   Sat, 01 Nov 2025 09:47:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:51:08 +0000   Sat, 01 Nov 2025 09:47:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:51:08 +0000   Sat, 01 Nov 2025 09:48:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-714840
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                f734a300-1b07-43a9-9d01-10886b98b0b1
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     cloud-spanner-emulator-6f9fcf858b-jlz98      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     hello-world-app-5d498dc89-6qr6f              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-lhntn                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  gcp-auth                    gcp-auth-78565c9fb4-rfbql                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9bmq7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m55s
	  kube-system                 coredns-66bc5c9577-jxfw2                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 csi-hostpathplugin-prqx4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 etcd-addons-714840                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m6s
	  kube-system                 kindnet-thg89                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m1s
	  kube-system                 kube-apiserver-addons-714840                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-714840        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-jkzc6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-714840                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 metrics-server-85b7d694d7-mshff              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m57s
	  kube-system                 nvidia-device-plugin-daemonset-2t6gg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 registry-6b586f9694-czvz6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 registry-creds-764b6fb674-bnkwh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 registry-proxy-w2s6j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 snapshot-controller-7d9fbc56b8-fzk67         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 snapshot-controller-7d9fbc56b8-gk5gb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  local-path-storage          local-path-provisioner-648f6765c9-bmh8h      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9rb44               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m58s                  kube-proxy       
	  Warning  CgroupV1                 5m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node addons-714840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node addons-714840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m14s (x8 over 5m14s)  kubelet          Node addons-714840 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m6s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m6s                   kubelet          Node addons-714840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m6s                   kubelet          Node addons-714840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m6s                   kubelet          Node addons-714840 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m2s                   node-controller  Node addons-714840 event: Registered Node addons-714840 in Controller
	  Normal   NodeReady                4m18s                  kubelet          Node addons-714840 status is now: NodeReady
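The node summary above (capacity, allocatable, per-pod requests and limits, events) can be regenerated against this profile, and, since the metrics-server addon is enabled in this run, live usage is available as well:

kubectl --context addons-714840 describe node addons-714840
kubectl --context addons-714840 top node addons-714840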
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014607] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.506888] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032735] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.832337] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.644621] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:37] hrtimer: interrupt took 44045431 ns
	[Nov 1 09:38] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Nov 1 09:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:47] overlayfs: idmapped layers are currently not supported
	[  +0.058238] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8] <==
	{"level":"warn","ts":"2025-11-01T09:47:59.957752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:47:59.979805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:47:59.990384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.012472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.029684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.039896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.057885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.077132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.095571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.113233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.131595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.156186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.177029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.194473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.216898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.340561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.358188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.417693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.494835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:16.097563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:16.141811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:38.497027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:38.520688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:38.554878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:38.588383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46968","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [cfb354a027b1d301bf9c0c79ff5672bb199d5061a790e46b5677aca8a8307135] <==
	2025/11/01 09:50:00 GCP Auth Webhook started!
	2025/11/01 09:50:03 Ready to marshal response ...
	2025/11/01 09:50:03 Ready to write response ...
	2025/11/01 09:50:03 Ready to marshal response ...
	2025/11/01 09:50:03 Ready to write response ...
	2025/11/01 09:50:03 Ready to marshal response ...
	2025/11/01 09:50:03 Ready to write response ...
	2025/11/01 09:50:23 Ready to marshal response ...
	2025/11/01 09:50:23 Ready to write response ...
	2025/11/01 09:50:25 Ready to marshal response ...
	2025/11/01 09:50:25 Ready to write response ...
	2025/11/01 09:50:25 Ready to marshal response ...
	2025/11/01 09:50:25 Ready to write response ...
	2025/11/01 09:50:33 Ready to marshal response ...
	2025/11/01 09:50:33 Ready to write response ...
	2025/11/01 09:50:41 Ready to marshal response ...
	2025/11/01 09:50:41 Ready to write response ...
	2025/11/01 09:50:49 Ready to marshal response ...
	2025/11/01 09:50:49 Ready to write response ...
	2025/11/01 09:51:05 Ready to marshal response ...
	2025/11/01 09:51:05 Ready to write response ...
	2025/11/01 09:53:08 Ready to marshal response ...
	2025/11/01 09:53:08 Ready to write response ...
	
	
	==> kernel <==
	 09:53:10 up  1:35,  0 user,  load average: 1.03, 1.87, 2.77
	Linux addons-714840 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e] <==
	I1101 09:51:02.046801       1 main.go:301] handling current node
	I1101 09:51:12.040680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:51:12.040805       1 main.go:301] handling current node
	I1101 09:51:22.045002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:51:22.045112       1 main.go:301] handling current node
	I1101 09:51:32.046067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:51:32.046101       1 main.go:301] handling current node
	I1101 09:51:42.048843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:51:42.048991       1 main.go:301] handling current node
	I1101 09:51:52.046927       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:51:52.046962       1 main.go:301] handling current node
	I1101 09:52:02.044960       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:52:02.044994       1 main.go:301] handling current node
	I1101 09:52:12.046859       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:52:12.046963       1 main.go:301] handling current node
	I1101 09:52:22.046418       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:52:22.046452       1 main.go:301] handling current node
	I1101 09:52:32.049163       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:52:32.049291       1 main.go:301] handling current node
	I1101 09:52:42.048649       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:52:42.048686       1 main.go:301] handling current node
	I1101 09:52:52.045439       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:52:52.045473       1 main.go:301] handling current node
	I1101 09:53:02.048994       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:53:02.049036       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79] <==
	W1101 09:48:38.492496       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:48:38.519169       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 09:48:38.553925       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 09:48:38.586955       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:48:52.430888       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.105.141:443: connect: connection refused
	E1101 09:48:52.430940       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.105.141:443: connect: connection refused" logger="UnhandledError"
	W1101 09:48:52.431410       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.105.141:443: connect: connection refused
	E1101 09:48:52.435494       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.105.141:443: connect: connection refused" logger="UnhandledError"
	W1101 09:48:52.520162       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.105.141:443: connect: connection refused
	E1101 09:48:52.520271       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.105.141:443: connect: connection refused" logger="UnhandledError"
	W1101 09:49:07.603743       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:49:07.603818       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:49:07.604848       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.26.83:443: connect: connection refused" logger="UnhandledError"
	E1101 09:49:07.607755       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.26.83:443: connect: connection refused" logger="UnhandledError"
	E1101 09:49:07.610846       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.26.83:443: connect: connection refused" logger="UnhandledError"
	I1101 09:49:07.756987       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:50:12.804404       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54872: use of closed network connection
	I1101 09:50:49.162482       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:50:49.475065       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.29.119"}
	I1101 09:50:52.699170       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1101 09:50:54.249065       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1101 09:53:08.413860       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.117.21"}
	
	
	==> kube-controller-manager [17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd] <==
	I1101 09:48:08.520987       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:48:08.520995       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:48:08.530444       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-714840" podCIDRs=["10.244.0.0/24"]
	I1101 09:48:08.563944       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:48:08.564061       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:48:08.564167       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:48:08.564243       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-714840"
	I1101 09:48:08.564285       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:48:08.564321       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:48:08.564171       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:48:08.566192       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:48:08.566328       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:48:08.566912       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:48:08.569247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:48:08.569279       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:48:08.569287       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1101 09:48:13.810570       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 09:48:38.476109       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:48:38.476274       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:48:38.476340       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:48:38.526040       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:48:38.538544       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:48:38.579374       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:48:38.639399       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:48:53.574641       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f] <==
	I1101 09:48:11.956541       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:48:12.068524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:48:12.191091       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:48:12.191131       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:48:12.191208       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:48:12.232336       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:48:12.232395       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:48:12.239325       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:48:12.239662       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:48:12.239680       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:48:12.248853       1 config.go:200] "Starting service config controller"
	I1101 09:48:12.248877       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:48:12.248894       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:48:12.248898       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:48:12.248915       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:48:12.248945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:48:12.249600       1 config.go:309] "Starting node config controller"
	I1101 09:48:12.249608       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:48:12.249618       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:48:12.351130       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:48:12.351174       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:48:12.351187       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce] <==
	I1101 09:48:01.633375       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:48:01.633418       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:48:01.636302       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1101 09:48:01.639789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:48:01.649406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:48:01.649657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:48:01.650675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:48:01.650808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:48:01.650920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:48:01.651046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:48:01.651690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:48:01.652795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:48:01.652953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:48:01.653502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:48:01.653559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:48:01.653657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:48:01.653657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:48:01.653791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:48:01.653847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:48:01.653893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:48:01.653920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:48:01.653998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:48:02.500646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:48:02.540755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1101 09:48:03.036608       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:51:13 addons-714840 kubelet[1270]: I1101 09:51:13.332577    1270 scope.go:117] "RemoveContainer" containerID="57e15e0fcda63b733eb691cd95d219a87019846ba09230cdc908d506494d7b1b"
	Nov 01 09:51:13 addons-714840 kubelet[1270]: E1101 09:51:13.333115    1270 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e15e0fcda63b733eb691cd95d219a87019846ba09230cdc908d506494d7b1b\": container with ID starting with 57e15e0fcda63b733eb691cd95d219a87019846ba09230cdc908d506494d7b1b not found: ID does not exist" containerID="57e15e0fcda63b733eb691cd95d219a87019846ba09230cdc908d506494d7b1b"
	Nov 01 09:51:13 addons-714840 kubelet[1270]: I1101 09:51:13.333156    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e15e0fcda63b733eb691cd95d219a87019846ba09230cdc908d506494d7b1b"} err="failed to get container status \"57e15e0fcda63b733eb691cd95d219a87019846ba09230cdc908d506494d7b1b\": rpc error: code = NotFound desc = could not find container \"57e15e0fcda63b733eb691cd95d219a87019846ba09230cdc908d506494d7b1b\": container with ID starting with 57e15e0fcda63b733eb691cd95d219a87019846ba09230cdc908d506494d7b1b not found: ID does not exist"
	Nov 01 09:51:13 addons-714840 kubelet[1270]: I1101 09:51:13.349436    1270 reconciler_common.go:299] "Volume detached for volume \"pvc-10d41ac7-9fba-45f3-ab4c-a9b6748b58ce\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^486feac2-b708-11f0-90c2-9a85f095bbd6\") on node \"addons-714840\" DevicePath \"\""
	Nov 01 09:51:14 addons-714840 kubelet[1270]: I1101 09:51:14.256708    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52de1bbc-b903-4ea0-964e-a3598239ecdd" path="/var/lib/kubelet/pods/52de1bbc-b903-4ea0-964e-a3598239ecdd/volumes"
	Nov 01 09:51:50 addons-714840 kubelet[1270]: I1101 09:51:50.251088    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-w2s6j" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:52:05 addons-714840 kubelet[1270]: E1101 09:52:05.225337    1270 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio-56e131752337dafbbbf892c301cb72bc2b4d05b607ea965b27235d0235e12217\": RecentStats: unable to find data in memory cache]"
	Nov 01 09:52:05 addons-714840 kubelet[1270]: I1101 09:52:05.251535    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-2t6gg" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:52:11 addons-714840 kubelet[1270]: I1101 09:52:11.251413    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-czvz6" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:52:15 addons-714840 kubelet[1270]: E1101 09:52:15.270794    1270 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio-56e131752337dafbbbf892c301cb72bc2b4d05b607ea965b27235d0235e12217\": RecentStats: unable to find data in memory cache]"
	Nov 01 09:53:02 addons-714840 kubelet[1270]: I1101 09:53:02.752314    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bnkwh" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:53:02 addons-714840 kubelet[1270]: W1101 09:53:02.788071    1270 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/crio-c832cabf8d455d076781b4866fef58624dbfb51564bdbca9caf4618b9a430714 WatchSource:0}: Error finding container c832cabf8d455d076781b4866fef58624dbfb51564bdbca9caf4618b9a430714: Status 404 returned error can't find the container with id c832cabf8d455d076781b4866fef58624dbfb51564bdbca9caf4618b9a430714
	Nov 01 09:53:04 addons-714840 kubelet[1270]: E1101 09:53:04.333367    1270 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fa76f94487c731d719cd28db001d5eb4c7a83e512a3366e1edd3c06e864bbdf3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fa76f94487c731d719cd28db001d5eb4c7a83e512a3366e1edd3c06e864bbdf3/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 09:53:04 addons-714840 kubelet[1270]: I1101 09:53:04.715968    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bnkwh" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:53:04 addons-714840 kubelet[1270]: I1101 09:53:04.716186    1270 scope.go:117] "RemoveContainer" containerID="b26dc3221238718c93e7db4fecf93355f41ffe616b82ca3e02e57b9a18c239de"
	Nov 01 09:53:05 addons-714840 kubelet[1270]: I1101 09:53:05.722660    1270 scope.go:117] "RemoveContainer" containerID="b26dc3221238718c93e7db4fecf93355f41ffe616b82ca3e02e57b9a18c239de"
	Nov 01 09:53:05 addons-714840 kubelet[1270]: I1101 09:53:05.722921    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bnkwh" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:53:05 addons-714840 kubelet[1270]: I1101 09:53:05.723622    1270 scope.go:117] "RemoveContainer" containerID="254cca6ba3a5cc089b4e3a09fb608e3b19d875e97c54a1a9912d96f1b2d07d77"
	Nov 01 09:53:05 addons-714840 kubelet[1270]: E1101 09:53:05.723871    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-bnkwh_kube-system(4d74e2c4-c5a3-45e3-9a6e-e70783d9e315)\"" pod="kube-system/registry-creds-764b6fb674-bnkwh" podUID="4d74e2c4-c5a3-45e3-9a6e-e70783d9e315"
	Nov 01 09:53:06 addons-714840 kubelet[1270]: I1101 09:53:06.728430    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bnkwh" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:53:06 addons-714840 kubelet[1270]: I1101 09:53:06.728486    1270 scope.go:117] "RemoveContainer" containerID="254cca6ba3a5cc089b4e3a09fb608e3b19d875e97c54a1a9912d96f1b2d07d77"
	Nov 01 09:53:06 addons-714840 kubelet[1270]: E1101 09:53:06.728633    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-bnkwh_kube-system(4d74e2c4-c5a3-45e3-9a6e-e70783d9e315)\"" pod="kube-system/registry-creds-764b6fb674-bnkwh" podUID="4d74e2c4-c5a3-45e3-9a6e-e70783d9e315"
	Nov 01 09:53:08 addons-714840 kubelet[1270]: I1101 09:53:08.450667    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7fj\" (UniqueName: \"kubernetes.io/projected/fac6ff5f-c753-4e44-bd48-c6dacc716485-kube-api-access-4w7fj\") pod \"hello-world-app-5d498dc89-6qr6f\" (UID: \"fac6ff5f-c753-4e44-bd48-c6dacc716485\") " pod="default/hello-world-app-5d498dc89-6qr6f"
	Nov 01 09:53:08 addons-714840 kubelet[1270]: I1101 09:53:08.451325    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fac6ff5f-c753-4e44-bd48-c6dacc716485-gcp-creds\") pod \"hello-world-app-5d498dc89-6qr6f\" (UID: \"fac6ff5f-c753-4e44-bd48-c6dacc716485\") " pod="default/hello-world-app-5d498dc89-6qr6f"
	Nov 01 09:53:08 addons-714840 kubelet[1270]: W1101 09:53:08.658561    1270 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/crio-5143679c53627fd11eb154bb665da4ad679f66d43581cc50840e4c2780b6cc86 WatchSource:0}: Error finding container 5143679c53627fd11eb154bb665da4ad679f66d43581cc50840e4c2780b6cc86: Status 404 returned error can't find the container with id 5143679c53627fd11eb154bb665da4ad679f66d43581cc50840e4c2780b6cc86
	
	
	==> storage-provisioner [c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323] <==
	W1101 09:52:44.921928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:46.925047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:46.929686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:48.933379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:48.940025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:50.943690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:50.948223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:52.957190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:52.969119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:54.984880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:54.989721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:56.993352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:57.004887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:59.008335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:52:59.016343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:01.019154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:01.024035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:03.027475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:03.032213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:05.036457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:05.041274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:07.045181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:07.050032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:09.054763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:53:09.060005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-714840 -n addons-714840
helpers_test.go:269: (dbg) Run:  kubectl --context addons-714840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-99jl2 ingress-nginx-admission-patch-8mgj2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-714840 describe pod ingress-nginx-admission-create-99jl2 ingress-nginx-admission-patch-8mgj2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-714840 describe pod ingress-nginx-admission-create-99jl2 ingress-nginx-admission-patch-8mgj2: exit status 1 (81.797238ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-99jl2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8mgj2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-714840 describe pod ingress-nginx-admission-create-99jl2 ingress-nginx-admission-patch-8mgj2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (311.420845ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:53:11.595614  304610 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:53:11.596473  304610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:53:11.596508  304610 out.go:374] Setting ErrFile to fd 2...
	I1101 09:53:11.596528  304610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:53:11.596836  304610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:53:11.597210  304610 mustload.go:66] Loading cluster: addons-714840
	I1101 09:53:11.597594  304610 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:53:11.597627  304610 addons.go:607] checking whether the cluster is paused
	I1101 09:53:11.597754  304610 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:53:11.597780  304610 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:53:11.598235  304610 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:53:11.630770  304610 ssh_runner.go:195] Run: systemctl --version
	I1101 09:53:11.630824  304610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:53:11.655681  304610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:53:11.774814  304610 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:53:11.774899  304610 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:53:11.811285  304610 cri.go:89] found id: "254cca6ba3a5cc089b4e3a09fb608e3b19d875e97c54a1a9912d96f1b2d07d77"
	I1101 09:53:11.811304  304610 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:53:11.811309  304610 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:53:11.811312  304610 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:53:11.811316  304610 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:53:11.811319  304610 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:53:11.811323  304610 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:53:11.811326  304610 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:53:11.811329  304610 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:53:11.811335  304610 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:53:11.811339  304610 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:53:11.811342  304610 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:53:11.811344  304610 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:53:11.811348  304610 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:53:11.811351  304610 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:53:11.811359  304610 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:53:11.811363  304610 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:53:11.811368  304610 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:53:11.811371  304610 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:53:11.811374  304610 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:53:11.811378  304610 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:53:11.811381  304610 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:53:11.811385  304610 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:53:11.811388  304610 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:53:11.811391  304610 cri.go:89] found id: ""
	I1101 09:53:11.811442  304610 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:53:11.827107  304610 out.go:203] 
	W1101 09:53:11.830017  304610 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:53:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:53:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:53:11.830052  304610 out.go:285] * 
	* 
	W1101 09:53:11.835068  304610 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:53:11.838109  304610 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable ingress --alsologtostderr -v=1: exit status 11 (258.765001ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:53:11.893718  304725 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:53:11.894443  304725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:53:11.894461  304725 out.go:374] Setting ErrFile to fd 2...
	I1101 09:53:11.894467  304725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:53:11.894893  304725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:53:11.895316  304725 mustload.go:66] Loading cluster: addons-714840
	I1101 09:53:11.896018  304725 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:53:11.896042  304725 addons.go:607] checking whether the cluster is paused
	I1101 09:53:11.896216  304725 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:53:11.896239  304725 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:53:11.897162  304725 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:53:11.914764  304725 ssh_runner.go:195] Run: systemctl --version
	I1101 09:53:11.914822  304725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:53:11.931787  304725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:53:12.035715  304725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:53:12.035817  304725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:53:12.070383  304725 cri.go:89] found id: "254cca6ba3a5cc089b4e3a09fb608e3b19d875e97c54a1a9912d96f1b2d07d77"
	I1101 09:53:12.070407  304725 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:53:12.070412  304725 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:53:12.070416  304725 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:53:12.070420  304725 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:53:12.070424  304725 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:53:12.070428  304725 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:53:12.070431  304725 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:53:12.070434  304725 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:53:12.070442  304725 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:53:12.070445  304725 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:53:12.070449  304725 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:53:12.070452  304725 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:53:12.070455  304725 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:53:12.070458  304725 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:53:12.070463  304725 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:53:12.070466  304725 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:53:12.070471  304725 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:53:12.070474  304725 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:53:12.070477  304725 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:53:12.070481  304725 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:53:12.070484  304725 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:53:12.070487  304725 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:53:12.070491  304725 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:53:12.070498  304725 cri.go:89] found id: ""
	I1101 09:53:12.070557  304725 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:53:12.086370  304725 out.go:203] 
	W1101 09:53:12.089403  304725 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:53:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:53:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:53:12.089435  304725 out.go:285] * 
	* 
	W1101 09:53:12.094476  304725 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:53:12.097510  304725 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.28s)
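note: every failed "addons disable" in this run aborts for the same reason visible in the stderr above: the paused check (addons.go:607, "checking whether the cluster is paused") lists kube-system containers via crictl successfully, then runs "sudo runc list -f json" on the node, which exits 1 with "open /run/runc: no such file or directory", so minikube bails out with MK_ADDON_DISABLE_PAUSED. A minimal manual sketch of that check, assuming the addons-714840 profile from this run is still up (both node commands are copied from the trace above and only wrapped in minikube ssh):

	out/minikube-linux-arm64 -p addons-714840 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system    # succeeds, prints the kube-system container IDs seen in the trace
	out/minikube-linux-arm64 -p addons-714840 ssh -- sudo runc list -f json                                                       # fails as above: open /run/runc: no such file or directory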

                                                
                                    
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lhntn" [a36c6fb5-9594-45e8-994d-8c2f4f6f73b1] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003411064s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (266.728087ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:50:48.609813  302674 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:48.610746  302674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:48.610763  302674 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:48.610770  302674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:48.611101  302674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:48.611485  302674 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:48.612008  302674 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:48.612030  302674 addons.go:607] checking whether the cluster is paused
	I1101 09:50:48.612189  302674 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:48.612208  302674 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:48.612708  302674 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:48.631443  302674 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:48.631504  302674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:48.650870  302674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:48.755937  302674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:48.756034  302674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:48.786688  302674 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:48.786712  302674 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:48.786718  302674 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:48.786722  302674 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:48.786726  302674 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:48.786730  302674 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:48.786734  302674 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:48.786737  302674 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:48.786740  302674 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:48.786751  302674 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:48.786755  302674 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:48.786758  302674 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:48.786762  302674 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:48.786766  302674 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:48.786774  302674 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:48.786785  302674 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:48.786799  302674 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:48.786804  302674 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:48.786807  302674 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:48.786811  302674 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:48.786816  302674 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:48.786819  302674 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:48.786822  302674 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:48.786825  302674 cri.go:89] found id: ""
	I1101 09:50:48.786881  302674 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:48.802187  302674 out.go:203] 
	W1101 09:50:48.805288  302674 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:48.805313  302674 out.go:285] * 
	* 
	W1101 09:50:48.810320  302674 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:48.813181  302674 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)
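Every addons enable/disable call in this run trips over the same pre-check: before touching the addon, minikube verifies the cluster is not paused by listing kube-system containers with crictl (which succeeds) and then running `sudo runc list -f json` on the node, which exits 1 with "open /run/runc: no such file or directory" on this crio node, so the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing the two steps of that check by hand, assuming the profile name addons-714840 from this run and that the node is reached with `minikube ssh` rather than the harness's internal SSH runner:

# step 1: list kube-system containers the way the check does (this step succeeds in the logs above)
minikube -p addons-714840 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

# step 2: the paused-container listing that fails in the logs above
minikube -p addons-714840 ssh -- sudo runc list -f json
# observed failure on this node: open /run/runc: no such file or directory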

                                                
                                    
TestAddons/parallel/MetricsServer (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.380504ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005069322s
addons_test.go:463: (dbg) Run:  kubectl --context addons-714840 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (317.038632ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:50:42.297366  302476 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:42.298183  302476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:42.298238  302476 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:42.298261  302476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:42.298734  302476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:42.299192  302476 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:42.300113  302476 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:42.300192  302476 addons.go:607] checking whether the cluster is paused
	I1101 09:50:42.300499  302476 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:42.300550  302476 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:42.301546  302476 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:42.323317  302476 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:42.323376  302476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:42.349648  302476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:42.459770  302476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:42.459877  302476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:42.512525  302476 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:42.512558  302476 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:42.512563  302476 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:42.512567  302476 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:42.512571  302476 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:42.512575  302476 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:42.512578  302476 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:42.512581  302476 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:42.512584  302476 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:42.512593  302476 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:42.512596  302476 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:42.512600  302476 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:42.512603  302476 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:42.512606  302476 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:42.512625  302476 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:42.512638  302476 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:42.512642  302476 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:42.512647  302476 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:42.512650  302476 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:42.512652  302476 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:42.512657  302476 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:42.512667  302476 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:42.512670  302476 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:42.512673  302476 cri.go:89] found id: ""
	I1101 09:50:42.512729  302476 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:42.531002  302476 out.go:203] 
	W1101 09:50:42.533257  302476 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:42.533280  302476 out.go:285] * 
	* 
	W1101 09:50:42.538218  302476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:42.541150  302476 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.45s)

                                                
                                    
TestAddons/parallel/CSI (40.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 09:50:34.062557  294288 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:50:34.074597  294288 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:50:34.074634  294288 kapi.go:107] duration metric: took 12.095952ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 12.10689ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [1c8f80c8-6bea-4e49-9f81-e777d2283100] Pending
helpers_test.go:352: "task-pv-pod" [1c8f80c8-6bea-4e49-9f81-e777d2283100] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [1c8f80c8-6bea-4e49-9f81-e777d2283100] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004163724s
addons_test.go:572: (dbg) Run:  kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-714840 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-714840 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-714840 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-714840 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [52de1bbc-b903-4ea0-964e-a3598239ecdd] Pending
helpers_test.go:352: "task-pv-pod-restore" [52de1bbc-b903-4ea0-964e-a3598239ecdd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [52de1bbc-b903-4ea0-964e-a3598239ecdd] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004414134s
addons_test.go:614: (dbg) Run:  kubectl --context addons-714840 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-714840 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-714840 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (278.941358ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:51:13.798674  303415 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:51:13.799572  303415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:51:13.799612  303415 out.go:374] Setting ErrFile to fd 2...
	I1101 09:51:13.799631  303415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:51:13.799924  303415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:51:13.800270  303415 mustload.go:66] Loading cluster: addons-714840
	I1101 09:51:13.800684  303415 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:51:13.800723  303415 addons.go:607] checking whether the cluster is paused
	I1101 09:51:13.800870  303415 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:51:13.800904  303415 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:51:13.801419  303415 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:51:13.819412  303415 ssh_runner.go:195] Run: systemctl --version
	I1101 09:51:13.819494  303415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:51:13.838338  303415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:51:13.947800  303415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:51:13.947886  303415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:51:13.987465  303415 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:51:13.987500  303415 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:51:13.987506  303415 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:51:13.987510  303415 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:51:13.987514  303415 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:51:13.987518  303415 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:51:13.987521  303415 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:51:13.987524  303415 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:51:13.987527  303415 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:51:13.987534  303415 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:51:13.987537  303415 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:51:13.987541  303415 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:51:13.987544  303415 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:51:13.987548  303415 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:51:13.987551  303415 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:51:13.987563  303415 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:51:13.987567  303415 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:51:13.987570  303415 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:51:13.987573  303415 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:51:13.987577  303415 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:51:13.987581  303415 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:51:13.987584  303415 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:51:13.987587  303415 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:51:13.987591  303415 cri.go:89] found id: ""
	I1101 09:51:13.987645  303415 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:51:14.004571  303415 out.go:203] 
	W1101 09:51:14.007695  303415 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:51:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:51:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:51:14.007727  303415 out.go:285] * 
	* 
	W1101 09:51:14.013225  303415 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:51:14.016442  303415 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (283.814322ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:51:14.090019  303476 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:51:14.090780  303476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:51:14.090814  303476 out.go:374] Setting ErrFile to fd 2...
	I1101 09:51:14.090833  303476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:51:14.091128  303476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:51:14.091463  303476 mustload.go:66] Loading cluster: addons-714840
	I1101 09:51:14.091872  303476 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:51:14.091910  303476 addons.go:607] checking whether the cluster is paused
	I1101 09:51:14.092043  303476 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:51:14.092070  303476 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:51:14.092548  303476 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:51:14.109868  303476 ssh_runner.go:195] Run: systemctl --version
	I1101 09:51:14.109938  303476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:51:14.126653  303476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:51:14.231465  303476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:51:14.231567  303476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:51:14.273627  303476 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:51:14.273645  303476 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:51:14.273650  303476 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:51:14.273653  303476 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:51:14.273656  303476 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:51:14.273660  303476 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:51:14.273663  303476 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:51:14.273666  303476 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:51:14.273669  303476 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:51:14.273677  303476 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:51:14.273681  303476 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:51:14.273684  303476 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:51:14.273687  303476 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:51:14.273690  303476 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:51:14.273693  303476 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:51:14.273697  303476 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:51:14.273701  303476 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:51:14.273707  303476 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:51:14.273711  303476 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:51:14.273713  303476 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:51:14.273718  303476 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:51:14.273721  303476 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:51:14.273724  303476 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:51:14.273727  303476 cri.go:89] found id: ""
	I1101 09:51:14.273778  303476 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:51:14.289451  303476 out.go:203] 
	W1101 09:51:14.292277  303476 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:51:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:51:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:51:14.292309  303476 out.go:285] * 
	* 
	W1101 09:51:14.297503  303476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:51:14.300486  303476 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.25s)
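The Kubernetes-level steps above all pass (the PVC binds, the pods run, the snapshot restores); only the two trailing addon-disable calls fail, again on the runc paused-check. For reference, the provision/snapshot/restore flow the test walks through can be replayed by hand. This is a rough sketch using the object names the test reports, assuming the manifests under testdata/csi-hostpath-driver from the minikube integration testdata (their contents are not reproduced here):

# provision a PVC on the hostpath CSI driver and run a pod against it
kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/pvc.yaml      # PVC "hpvc"
kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/pv-pod.yaml   # pod "task-pv-pod"

# snapshot the volume, drop the originals, then restore into a new PVC and pod
kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
kubectl --context addons-714840 delete pod task-pv-pod
kubectl --context addons-714840 delete pvc hpvc
kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore"
kubectl --context addons-714840 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore"

# poll readiness the same way the test helpers do
kubectl --context addons-714840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
kubectl --context addons-714840 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default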

                                                
                                    
TestAddons/parallel/Headlamp (3.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-714840 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-714840 --alsologtostderr -v=1: exit status 11 (300.963715ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:50:33.476009  301785 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:33.476802  301785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:33.476819  301785 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:33.476825  301785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:33.477149  301785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:33.477509  301785 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:33.477890  301785 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:33.477941  301785 addons.go:607] checking whether the cluster is paused
	I1101 09:50:33.478071  301785 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:33.478087  301785 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:33.478537  301785 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:33.499492  301785 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:33.499554  301785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:33.521740  301785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:33.635908  301785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:33.635997  301785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:33.665332  301785 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:33.665354  301785 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:33.665363  301785 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:33.665368  301785 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:33.665371  301785 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:33.665375  301785 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:33.665379  301785 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:33.665384  301785 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:33.665387  301785 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:33.665394  301785 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:33.665397  301785 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:33.665401  301785 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:33.665404  301785 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:33.665408  301785 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:33.665418  301785 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:33.665423  301785 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:33.665427  301785 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:33.665430  301785 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:33.665433  301785 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:33.665436  301785 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:33.665442  301785 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:33.665449  301785 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:33.665453  301785 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:33.665456  301785 cri.go:89] found id: ""
	I1101 09:50:33.665520  301785 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:33.693669  301785 out.go:203] 
	W1101 09:50:33.696668  301785 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:33.696702  301785 out.go:285] * 
	* 
	W1101 09:50:33.701806  301785 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:33.705172  301785 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-714840 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-714840
helpers_test.go:243: (dbg) docker inspect addons-714840:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9",
	        "Created": "2025-11-01T09:47:37.747589113Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295447,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:47:37.814855295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/hosts",
	        "LogPath": "/var/lib/docker/containers/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9/c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9-json.log",
	        "Name": "/addons-714840",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-714840:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-714840",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1f1da656a11fb50c6197a4f316b1ee6bb50e81500b7e39f6d7ae8e703d012b9",
	                "LowerDir": "/var/lib/docker/overlay2/9c5c3fba1b0d3deba2fb576c1d6bb043473ad67ea28e7e64bc49e52c6f90d1bd-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c5c3fba1b0d3deba2fb576c1d6bb043473ad67ea28e7e64bc49e52c6f90d1bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c5c3fba1b0d3deba2fb576c1d6bb043473ad67ea28e7e64bc49e52c6f90d1bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c5c3fba1b0d3deba2fb576c1d6bb043473ad67ea28e7e64bc49e52c6f90d1bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-714840",
	                "Source": "/var/lib/docker/volumes/addons-714840/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-714840",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-714840",
	                "name.minikube.sigs.k8s.io": "addons-714840",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90e19efb3514b7358870171644e8ede39b8886462b9c8dbc3f7fdc64179a3377",
	            "SandboxKey": "/var/run/docker/netns/90e19efb3514",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-714840": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:08:94:61:e5:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2ec2e4bdf07ebc49b6f3f28ea34af4ab99e24d4d2a098b7e81e52c59c2b45c0b",
	                    "EndpointID": "eec3bf0f47089c814107edfacd628d11abf1c24a2434396378e83c340232aa69",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-714840",
	                        "c1f1da656a11"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-714840 -n addons-714840
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-714840 logs -n 25: (1.578497876s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-633552 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-633552   │ jenkins │ v1.37.0 │ 01 Nov 25 09:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ delete  │ -p download-only-633552                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-633552   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ start   │ -o=json --download-only -p download-only-046639 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-046639   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ delete  │ -p download-only-046639                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-046639   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ delete  │ -p download-only-633552                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-633552   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ delete  │ -p download-only-046639                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-046639   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ start   │ --download-only -p download-docker-896540 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-896540 │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	│ delete  │ -p download-docker-896540                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-896540 │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ start   │ --download-only -p binary-mirror-569786 --alsologtostderr --binary-mirror http://127.0.0.1:45357 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-569786   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	│ delete  │ -p binary-mirror-569786                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-569786   │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ addons  │ disable dashboard -p addons-714840                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	│ addons  │ enable dashboard -p addons-714840                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	│ start   │ -p addons-714840 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:50 UTC │
	│ addons  │ addons-714840 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ ip      │ addons-714840 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │ 01 Nov 25 09:50 UTC │
	│ addons  │ addons-714840 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ ssh     │ addons-714840 ssh cat /opt/local-path-provisioner/pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │ 01 Nov 25 09:50 UTC │
	│ addons  │ addons-714840 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ enable headlamp -p addons-714840 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	│ addons  │ addons-714840 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-714840          │ jenkins │ v1.37.0 │ 01 Nov 25 09:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:47:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:47:12.843619  295049 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:47:12.843986  295049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:47:12.844028  295049 out.go:374] Setting ErrFile to fd 2...
	I1101 09:47:12.844049  295049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:47:12.844355  295049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:47:12.844964  295049 out.go:368] Setting JSON to false
	I1101 09:47:12.845825  295049 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5385,"bootTime":1761985048,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 09:47:12.845936  295049 start.go:143] virtualization:  
	I1101 09:47:12.849391  295049 out.go:179] * [addons-714840] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:47:12.852405  295049 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:47:12.852488  295049 notify.go:221] Checking for updates...
	I1101 09:47:12.858191  295049 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:47:12.861252  295049 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 09:47:12.864181  295049 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 09:47:12.867035  295049 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:47:12.870023  295049 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:47:12.873330  295049 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:47:12.895609  295049 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:47:12.895733  295049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:47:12.956456  295049 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 09:47:12.947501916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:47:12.956564  295049 docker.go:319] overlay module found
	I1101 09:47:12.959668  295049 out.go:179] * Using the docker driver based on user configuration
	I1101 09:47:12.962515  295049 start.go:309] selected driver: docker
	I1101 09:47:12.962535  295049 start.go:930] validating driver "docker" against <nil>
	I1101 09:47:12.962561  295049 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:47:12.963306  295049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:47:13.019029  295049 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 09:47:13.009972034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:47:13.019199  295049 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:47:13.019440  295049 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:47:13.022344  295049 out.go:179] * Using Docker driver with root privileges
	I1101 09:47:13.025208  295049 cni.go:84] Creating CNI manager for ""
	I1101 09:47:13.025281  295049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:47:13.025290  295049 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:47:13.025371  295049 start.go:353] cluster config:
	{Name:addons-714840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 09:47:13.028458  295049 out.go:179] * Starting "addons-714840" primary control-plane node in "addons-714840" cluster
	I1101 09:47:13.031254  295049 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:47:13.034136  295049 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:47:13.037013  295049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:47:13.037076  295049 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:47:13.037090  295049 cache.go:59] Caching tarball of preloaded images
	I1101 09:47:13.037104  295049 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:47:13.037184  295049 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:47:13.037194  295049 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:47:13.037534  295049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/config.json ...
	I1101 09:47:13.037554  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/config.json: {Name:mked4fc3681e07235fb3e32952c51287c293d99b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:13.053141  295049 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:47:13.053271  295049 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:47:13.053296  295049 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:47:13.053303  295049 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:47:13.053315  295049 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:47:13.053321  295049 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 09:47:30.907072  295049 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 09:47:30.907112  295049 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:47:30.907143  295049 start.go:360] acquireMachinesLock for addons-714840: {Name:mkf6ac0e8c3fba79ae7fc6678b78aa6e902dfc1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:47:30.907268  295049 start.go:364] duration metric: took 99.16µs to acquireMachinesLock for "addons-714840"
	I1101 09:47:30.907300  295049 start.go:93] Provisioning new machine with config: &{Name:addons-714840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:47:30.907390  295049 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:47:30.911013  295049 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 09:47:30.911267  295049 start.go:159] libmachine.API.Create for "addons-714840" (driver="docker")
	I1101 09:47:30.911305  295049 client.go:173] LocalClient.Create starting
	I1101 09:47:30.911437  295049 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 09:47:30.952863  295049 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 09:47:31.018704  295049 cli_runner.go:164] Run: docker network inspect addons-714840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:47:31.035218  295049 cli_runner.go:211] docker network inspect addons-714840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:47:31.035316  295049 network_create.go:284] running [docker network inspect addons-714840] to gather additional debugging logs...
	I1101 09:47:31.035340  295049 cli_runner.go:164] Run: docker network inspect addons-714840
	W1101 09:47:31.050814  295049 cli_runner.go:211] docker network inspect addons-714840 returned with exit code 1
	I1101 09:47:31.050847  295049 network_create.go:287] error running [docker network inspect addons-714840]: docker network inspect addons-714840: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-714840 not found
	I1101 09:47:31.050863  295049 network_create.go:289] output of [docker network inspect addons-714840]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-714840 not found
	
	** /stderr **
	I1101 09:47:31.051026  295049 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:47:31.069387  295049 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1f510}
	I1101 09:47:31.069433  295049 network_create.go:124] attempt to create docker network addons-714840 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 09:47:31.069510  295049 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-714840 addons-714840
	I1101 09:47:31.131436  295049 network_create.go:108] docker network addons-714840 192.168.49.0/24 created
	I1101 09:47:31.131473  295049 kic.go:121] calculated static IP "192.168.49.2" for the "addons-714840" container
	I1101 09:47:31.131558  295049 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:47:31.148143  295049 cli_runner.go:164] Run: docker volume create addons-714840 --label name.minikube.sigs.k8s.io=addons-714840 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:47:31.166811  295049 oci.go:103] Successfully created a docker volume addons-714840
	I1101 09:47:31.166895  295049 cli_runner.go:164] Run: docker run --rm --name addons-714840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-714840 --entrypoint /usr/bin/test -v addons-714840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:47:33.253677  295049 cli_runner.go:217] Completed: docker run --rm --name addons-714840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-714840 --entrypoint /usr/bin/test -v addons-714840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.086732552s)
	I1101 09:47:33.253704  295049 oci.go:107] Successfully prepared a docker volume addons-714840
	I1101 09:47:33.253742  295049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:47:33.253760  295049 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:47:33.253826  295049 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-714840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:47:37.681114  295049 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-714840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.427253897s)
	I1101 09:47:37.681145  295049 kic.go:203] duration metric: took 4.427381669s to extract preloaded images to volume ...
	W1101 09:47:37.681310  295049 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:47:37.681429  295049 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:47:37.732870  295049 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-714840 --name addons-714840 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-714840 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-714840 --network addons-714840 --ip 192.168.49.2 --volume addons-714840:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:47:38.072459  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Running}}
	I1101 09:47:38.098960  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:47:38.124360  295049 cli_runner.go:164] Run: docker exec addons-714840 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:47:38.178949  295049 oci.go:144] the created container "addons-714840" has a running status.
	I1101 09:47:38.178980  295049 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa...
	I1101 09:47:38.328327  295049 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:47:38.354866  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:47:38.382112  295049 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:47:38.382139  295049 kic_runner.go:114] Args: [docker exec --privileged addons-714840 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:47:38.441756  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:47:38.462688  295049 machine.go:94] provisionDockerMachine start ...
	I1101 09:47:38.462797  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:38.491698  295049 main.go:143] libmachine: Using SSH client type: native
	I1101 09:47:38.492313  295049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1101 09:47:38.492334  295049 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:47:38.493125  295049 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:47:41.640599  295049 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-714840
	
	I1101 09:47:41.640625  295049 ubuntu.go:182] provisioning hostname "addons-714840"
	I1101 09:47:41.640696  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:41.657616  295049 main.go:143] libmachine: Using SSH client type: native
	I1101 09:47:41.657930  295049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1101 09:47:41.657948  295049 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-714840 && echo "addons-714840" | sudo tee /etc/hostname
	I1101 09:47:41.814474  295049 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-714840
	
	I1101 09:47:41.814575  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:41.831973  295049 main.go:143] libmachine: Using SSH client type: native
	I1101 09:47:41.832293  295049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1101 09:47:41.832315  295049 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-714840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-714840/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-714840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:47:41.981105  295049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:47:41.981129  295049 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 09:47:41.981162  295049 ubuntu.go:190] setting up certificates
	I1101 09:47:41.981171  295049 provision.go:84] configureAuth start
	I1101 09:47:41.981232  295049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-714840
	I1101 09:47:41.998738  295049 provision.go:143] copyHostCerts
	I1101 09:47:41.998842  295049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 09:47:41.999005  295049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 09:47:41.999068  295049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 09:47:41.999116  295049 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.addons-714840 san=[127.0.0.1 192.168.49.2 addons-714840 localhost minikube]
	I1101 09:47:42.358004  295049 provision.go:177] copyRemoteCerts
	I1101 09:47:42.358077  295049 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:47:42.358128  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:42.375834  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:42.480741  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:47:42.498439  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:47:42.516375  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:47:42.533368  295049 provision.go:87] duration metric: took 552.183791ms to configureAuth
	I1101 09:47:42.533392  295049 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:47:42.533592  295049 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:47:42.533691  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:42.550392  295049 main.go:143] libmachine: Using SSH client type: native
	I1101 09:47:42.550694  295049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1101 09:47:42.550713  295049 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:47:42.803021  295049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:47:42.803041  295049 machine.go:97] duration metric: took 4.340330275s to provisionDockerMachine
	I1101 09:47:42.803059  295049 client.go:176] duration metric: took 11.891741885s to LocalClient.Create
	I1101 09:47:42.803074  295049 start.go:167] duration metric: took 11.891808668s to libmachine.API.Create "addons-714840"
	I1101 09:47:42.803081  295049 start.go:293] postStartSetup for "addons-714840" (driver="docker")
	I1101 09:47:42.803091  295049 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:47:42.803166  295049 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:47:42.803210  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:42.822660  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:42.929263  295049 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:47:42.932787  295049 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:47:42.932814  295049 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:47:42.932844  295049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 09:47:42.932942  295049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 09:47:42.932972  295049 start.go:296] duration metric: took 129.88523ms for postStartSetup
	I1101 09:47:42.933293  295049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-714840
	I1101 09:47:42.950115  295049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/config.json ...
	I1101 09:47:42.950418  295049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:47:42.950466  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:42.973178  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:43.074196  295049 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:47:43.079411  295049 start.go:128] duration metric: took 12.172006109s to createHost
	I1101 09:47:43.079486  295049 start.go:83] releasing machines lock for "addons-714840", held for 12.17220273s
	I1101 09:47:43.079585  295049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-714840
	I1101 09:47:43.096394  295049 ssh_runner.go:195] Run: cat /version.json
	I1101 09:47:43.096453  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:43.096725  295049 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:47:43.096777  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:47:43.115693  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:43.126664  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:47:43.310260  295049 ssh_runner.go:195] Run: systemctl --version
	I1101 09:47:43.316791  295049 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:47:43.352328  295049 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:47:43.356845  295049 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:47:43.356916  295049 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:47:43.386072  295049 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:47:43.386094  295049 start.go:496] detecting cgroup driver to use...
	I1101 09:47:43.386128  295049 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:47:43.386196  295049 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:47:43.402430  295049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:47:43.414931  295049 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:47:43.415013  295049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:47:43.432744  295049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:47:43.452421  295049 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:47:43.561767  295049 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:47:43.677294  295049 docker.go:234] disabling docker service ...
	I1101 09:47:43.677400  295049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:47:43.698153  295049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:47:43.711540  295049 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:47:43.824402  295049 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:47:43.951039  295049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:47:43.964503  295049 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:47:43.978637  295049 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:47:43.978733  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:43.987800  295049 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:47:43.987904  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.004498  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.014302  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.023697  295049 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:47:44.032018  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.041315  295049 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.055402  295049 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:47:44.065621  295049 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:47:44.073724  295049 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:47:44.081543  295049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:47:44.194143  295049 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:47:44.316208  295049 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:47:44.316294  295049 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:47:44.320708  295049 start.go:564] Will wait 60s for crictl version
	I1101 09:47:44.320769  295049 ssh_runner.go:195] Run: which crictl
	I1101 09:47:44.324613  295049 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:47:44.351022  295049 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:47:44.351129  295049 ssh_runner.go:195] Run: crio --version
	I1101 09:47:44.379256  295049 ssh_runner.go:195] Run: crio --version
	I1101 09:47:44.410704  295049 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:47:44.413478  295049 cli_runner.go:164] Run: docker network inspect addons-714840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:47:44.429137  295049 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:47:44.432885  295049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:47:44.442394  295049 kubeadm.go:884] updating cluster {Name:addons-714840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:47:44.442503  295049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:47:44.442565  295049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:47:44.473799  295049 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:47:44.473827  295049 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:47:44.473883  295049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:47:44.500832  295049 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:47:44.500855  295049 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:47:44.500864  295049 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:47:44.500965  295049 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-714840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
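	The kubelet flags above end up in the systemd drop-in written a few lines below (the 10-kubeadm.conf scp); they can be inspected on the node afterwards with, for example:
	  out/minikube-linux-arm64 -p addons-714840 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"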
	I1101 09:47:44.501058  295049 ssh_runner.go:195] Run: crio config
	I1101 09:47:44.571500  295049 cni.go:84] Creating CNI manager for ""
	I1101 09:47:44.571543  295049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:47:44.571569  295049 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:47:44.571596  295049 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-714840 NodeName:addons-714840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:47:44.571729  295049 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-714840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:47:44.571801  295049 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:47:44.580050  295049 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:47:44.580165  295049 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:47:44.587602  295049 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:47:44.601075  295049 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:47:44.615318  295049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
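	The kubeadm config rendered above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new; if it ever needs a manual sanity check before init, recent kubeadm releases can validate it in place (a sketch, not something this run performs):
	  out/minikube-linux-arm64 -p addons-714840 ssh "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"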
	I1101 09:47:44.627927  295049 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:47:44.631492  295049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:47:44.641114  295049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:47:44.754117  295049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:47:44.769431  295049 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840 for IP: 192.168.49.2
	I1101 09:47:44.769501  295049 certs.go:195] generating shared ca certs ...
	I1101 09:47:44.769532  295049 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:44.769715  295049 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 09:47:45.855651  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt ...
	I1101 09:47:45.855691  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt: {Name:mk4cf6468ef14d02cbd92410cd4782247383e44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:45.855900  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key ...
	I1101 09:47:45.855915  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key: {Name:mkbb72774e975f12896558de8f15660fe435c737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:45.856001  295049 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 09:47:46.438280  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt ...
	I1101 09:47:46.438312  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt: {Name:mk75fe7abee7e2bf689341d7fc63412ff1c56ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:46.438488  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key ...
	I1101 09:47:46.438503  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key: {Name:mk0d554276ebbdf56caa33fbbdc37d214891a71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:46.438573  295049 certs.go:257] generating profile certs ...
	I1101 09:47:46.438638  295049 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.key
	I1101 09:47:46.438655  295049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt with IP's: []
	I1101 09:47:46.518500  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt ...
	I1101 09:47:46.518540  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: {Name:mk008a7fb412a8f7e0c037aa79a6e080994e63fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:46.518715  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.key ...
	I1101 09:47:46.518728  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.key: {Name:mkfbf8eb870384e4f6262a0b3a26653a945b8813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:46.518810  295049 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key.02841626
	I1101 09:47:46.518832  295049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt.02841626 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 09:47:47.164193  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt.02841626 ...
	I1101 09:47:47.164222  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt.02841626: {Name:mkbc2926a9f0443507812bd0cf620bed953ae434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:47.164396  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key.02841626 ...
	I1101 09:47:47.164410  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key.02841626: {Name:mk80c643fc96b5dd18d1f8a9eb5979373c38a755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:47.164494  295049 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt.02841626 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt
	I1101 09:47:47.164572  295049 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key.02841626 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key
	I1101 09:47:47.164623  295049 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.key
	I1101 09:47:47.164647  295049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.crt with IP's: []
	I1101 09:47:47.532521  295049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.crt ...
	I1101 09:47:47.532551  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.crt: {Name:mkf740531a1e7849e21aa37a19c12549fd5957b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:47.533333  295049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.key ...
	I1101 09:47:47.533357  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.key: {Name:mk84bc1536779125bb5db632c9430f67362944bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:47.533571  295049 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:47:47.533617  295049 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:47:47.533650  295049 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:47:47.533709  295049 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 09:47:47.534266  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:47:47.552660  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:47:47.571578  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:47:47.589590  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:47:47.609794  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:47:47.628330  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:47:47.646513  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:47:47.664295  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:47:47.682097  295049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:47:47.700683  295049 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:47:47.713668  295049 ssh_runner.go:195] Run: openssl version
	I1101 09:47:47.720004  295049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:47:47.728521  295049 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:47:47.732485  295049 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:47:47.732578  295049 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:47:47.773845  295049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
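	The apiserver certificate distributed above was signed for the service and node IPs listed earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2); its SANs can be confirmed on the node with something like (a sketch, not from this log):
	  out/minikube-linux-arm64 -p addons-714840 ssh "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text" | grep -A1 "Subject Alternative Name"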
	I1101 09:47:47.782710  295049 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:47:47.786446  295049 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:47:47.786517  295049 kubeadm.go:401] StartCluster: {Name:addons-714840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-714840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:47:47.786613  295049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:47:47.786690  295049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:47:47.814891  295049 cri.go:89] found id: ""
	I1101 09:47:47.815028  295049 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:47:47.823055  295049 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:47:47.831265  295049 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:47:47.831386  295049 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:47:47.839816  295049 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:47:47.839838  295049 kubeadm.go:158] found existing configuration files:
	
	I1101 09:47:47.839914  295049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:47:47.847920  295049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:47:47.847985  295049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:47:47.855765  295049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:47:47.863715  295049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:47:47.863833  295049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:47:47.871351  295049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:47:47.879379  295049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:47:47.879465  295049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:47:47.886918  295049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:47:47.897236  295049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:47:47.897306  295049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:47:47.906040  295049 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:47:47.961969  295049 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:47:47.962034  295049 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:47:47.986338  295049 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:47:47.986449  295049 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:47:47.986512  295049 kubeadm.go:319] OS: Linux
	I1101 09:47:47.986584  295049 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:47:47.986658  295049 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:47:47.986750  295049 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:47:47.986841  295049 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:47:47.986919  295049 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:47:47.987021  295049 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:47:47.987091  295049 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:47:47.987160  295049 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:47:47.987242  295049 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:47:48.066454  295049 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:47:48.066613  295049 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:47:48.066744  295049 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:47:48.077019  295049 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:47:48.080236  295049 out.go:252]   - Generating certificates and keys ...
	I1101 09:47:48.080426  295049 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:47:48.080521  295049 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:47:48.736193  295049 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:47:49.011011  295049 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:47:49.603343  295049 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:47:49.673166  295049 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:47:49.739268  295049 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:47:49.739640  295049 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-714840 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:47:50.696305  295049 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:47:50.696668  295049 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-714840 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:47:51.329965  295049 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:47:51.647035  295049 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:47:52.274847  295049 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:47:52.275237  295049 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:47:52.698810  295049 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:47:52.820002  295049 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:47:53.883978  295049 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:47:54.947925  295049 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:47:55.190971  295049 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:47:55.191515  295049 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:47:55.194217  295049 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:47:55.197599  295049 out.go:252]   - Booting up control plane ...
	I1101 09:47:55.197732  295049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:47:55.198170  295049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:47:55.199615  295049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:47:55.217423  295049 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:47:55.217769  295049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:47:55.226601  295049 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:47:55.227302  295049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:47:55.227600  295049 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:47:55.379132  295049 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:47:55.379258  295049 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:47:56.379897  295049 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000920948s
	I1101 09:47:56.383513  295049 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:47:56.383622  295049 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 09:47:56.383716  295049 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:47:56.383797  295049 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:47:58.905652  295049 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.521514249s
	I1101 09:48:01.646009  295049 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.262539265s
	I1101 09:48:03.385245  295049 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001425626s
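	These are the same health endpoints that can be polled by hand from the node once the static pods are running (a sketch, assuming curl is available in the node image; -k skips verification against the cluster CA):
	  out/minikube-linux-arm64 -p addons-714840 ssh "curl -sk https://192.168.49.2:8443/livez"
	  out/minikube-linux-arm64 -p addons-714840 ssh "curl -sk https://127.0.0.1:10257/healthz"
	  out/minikube-linux-arm64 -p addons-714840 ssh "curl -sk https://127.0.0.1:10259/livez"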
	I1101 09:48:03.405594  295049 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:48:03.419942  295049 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:48:03.434806  295049 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:48:03.435072  295049 kubeadm.go:319] [mark-control-plane] Marking the node addons-714840 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:48:03.453110  295049 kubeadm.go:319] [bootstrap-token] Using token: 4hiyw7.npwciemn6akdakal
	I1101 09:48:03.458027  295049 out.go:252]   - Configuring RBAC rules ...
	I1101 09:48:03.458155  295049 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:48:03.464703  295049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:48:03.475357  295049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:48:03.480681  295049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:48:03.489354  295049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:48:03.494012  295049 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:48:03.794316  295049 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:48:04.221962  295049 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:48:04.791512  295049 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:48:04.792686  295049 kubeadm.go:319] 
	I1101 09:48:04.792793  295049 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:48:04.792819  295049 kubeadm.go:319] 
	I1101 09:48:04.792902  295049 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:48:04.792912  295049 kubeadm.go:319] 
	I1101 09:48:04.792979  295049 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:48:04.793052  295049 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:48:04.793110  295049 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:48:04.793119  295049 kubeadm.go:319] 
	I1101 09:48:04.793175  295049 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:48:04.793184  295049 kubeadm.go:319] 
	I1101 09:48:04.793234  295049 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:48:04.793242  295049 kubeadm.go:319] 
	I1101 09:48:04.793296  295049 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:48:04.793378  295049 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:48:04.793453  295049 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:48:04.793461  295049 kubeadm.go:319] 
	I1101 09:48:04.793549  295049 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:48:04.793639  295049 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:48:04.793665  295049 kubeadm.go:319] 
	I1101 09:48:04.793759  295049 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4hiyw7.npwciemn6akdakal \
	I1101 09:48:04.793874  295049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 09:48:04.793904  295049 kubeadm.go:319] 	--control-plane 
	I1101 09:48:04.793917  295049 kubeadm.go:319] 
	I1101 09:48:04.794006  295049 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:48:04.794016  295049 kubeadm.go:319] 
	I1101 09:48:04.794101  295049 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4hiyw7.npwciemn6akdakal \
	I1101 09:48:04.794212  295049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 09:48:04.797787  295049 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:48:04.798041  295049 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:48:04.798197  295049 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:48:04.798234  295049 cni.go:84] Creating CNI manager for ""
	I1101 09:48:04.798254  295049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:48:04.801452  295049 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:48:04.805266  295049 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:48:04.809243  295049 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:48:04.809265  295049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:48:04.822642  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
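	Once that manifest is applied, the kindnet pods should appear in kube-system; a quick check, mirroring the node-side kubectl invocations used throughout this run, would be (a sketch; the app=kindnet label is how the upstream kindnet manifest labels its pods, not something verified in this log):
	  out/minikube-linux-arm64 -p addons-714840 ssh "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l app=kindnet"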
	I1101 09:48:05.096639  295049 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:48:05.096734  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:05.096783  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-714840 minikube.k8s.io/updated_at=2025_11_01T09_48_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=addons-714840 minikube.k8s.io/primary=true
	I1101 09:48:05.228861  295049 ops.go:34] apiserver oom_adj: -16
	I1101 09:48:05.228992  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:05.729243  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:06.229066  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:06.729619  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:07.230028  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:07.729086  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:08.230112  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:08.729778  295049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:48:08.825841  295049 kubeadm.go:1114] duration metric: took 3.729158989s to wait for elevateKubeSystemPrivileges
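	The repeated "kubectl get sa default" calls above are a readiness poll: the default ServiceAccount only exists once the controller-manager's service-account controller has run, so minikube retries roughly every 500ms until the call succeeds. Done by hand on the node, the equivalent loop would be (a sketch):
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done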
	I1101 09:48:08.825873  295049 kubeadm.go:403] duration metric: took 21.039379784s to StartCluster
	I1101 09:48:08.825892  295049 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:48:08.826003  295049 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 09:48:08.826397  295049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:48:08.826601  295049 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:48:08.826750  295049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:48:08.827038  295049 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:48:08.827077  295049 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
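	The toEnable map above is the per-addon switchboard for this profile; the resulting state can be listed afterwards with (a sketch, not run in this test):
	  out/minikube-linux-arm64 -p addons-714840 addons list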
	I1101 09:48:08.827159  295049 addons.go:70] Setting yakd=true in profile "addons-714840"
	I1101 09:48:08.827179  295049 addons.go:239] Setting addon yakd=true in "addons-714840"
	I1101 09:48:08.827203  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.827706  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.828069  295049 addons.go:70] Setting metrics-server=true in profile "addons-714840"
	I1101 09:48:08.828092  295049 addons.go:239] Setting addon metrics-server=true in "addons-714840"
	I1101 09:48:08.828118  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.828560  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.828717  295049 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-714840"
	I1101 09:48:08.828734  295049 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-714840"
	I1101 09:48:08.828754  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.829169  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.832100  295049 addons.go:70] Setting registry=true in profile "addons-714840"
	I1101 09:48:08.832138  295049 addons.go:239] Setting addon registry=true in "addons-714840"
	I1101 09:48:08.832173  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.832687  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.832844  295049 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-714840"
	I1101 09:48:08.832883  295049 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-714840"
	I1101 09:48:08.832956  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.834213  295049 addons.go:70] Setting registry-creds=true in profile "addons-714840"
	I1101 09:48:08.834244  295049 addons.go:239] Setting addon registry-creds=true in "addons-714840"
	I1101 09:48:08.834278  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.834592  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.834683  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.852848  295049 addons.go:70] Setting cloud-spanner=true in profile "addons-714840"
	I1101 09:48:08.852964  295049 addons.go:239] Setting addon cloud-spanner=true in "addons-714840"
	I1101 09:48:08.853033  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.853558  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.856096  295049 addons.go:70] Setting storage-provisioner=true in profile "addons-714840"
	I1101 09:48:08.856175  295049 addons.go:239] Setting addon storage-provisioner=true in "addons-714840"
	I1101 09:48:08.856242  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.856854  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.873281  295049 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-714840"
	I1101 09:48:08.873313  295049 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-714840"
	I1101 09:48:08.873339  295049 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-714840"
	I1101 09:48:08.873351  295049 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-714840"
	I1101 09:48:08.873379  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.873667  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.873815  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.883201  295049 addons.go:70] Setting default-storageclass=true in profile "addons-714840"
	I1101 09:48:08.883241  295049 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-714840"
	I1101 09:48:08.883610  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.887780  295049 addons.go:70] Setting volcano=true in profile "addons-714840"
	I1101 09:48:08.887823  295049 addons.go:239] Setting addon volcano=true in "addons-714840"
	I1101 09:48:08.887861  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.888825  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.899697  295049 addons.go:70] Setting gcp-auth=true in profile "addons-714840"
	I1101 09:48:08.899740  295049 mustload.go:66] Loading cluster: addons-714840
	I1101 09:48:08.899972  295049 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:48:08.900230  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.905014  295049 addons.go:70] Setting volumesnapshots=true in profile "addons-714840"
	I1101 09:48:08.905054  295049 addons.go:239] Setting addon volumesnapshots=true in "addons-714840"
	I1101 09:48:08.905091  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.905570  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.907383  295049 out.go:179] * Verifying Kubernetes components...
	I1101 09:48:08.927577  295049 addons.go:70] Setting ingress=true in profile "addons-714840"
	I1101 09:48:08.927614  295049 addons.go:239] Setting addon ingress=true in "addons-714840"
	I1101 09:48:08.927663  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.928144  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.949201  295049 addons.go:70] Setting ingress-dns=true in profile "addons-714840"
	I1101 09:48:08.949239  295049 addons.go:239] Setting addon ingress-dns=true in "addons-714840"
	I1101 09:48:08.949282  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.949779  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.977468  295049 addons.go:70] Setting inspektor-gadget=true in profile "addons-714840"
	I1101 09:48:08.977501  295049 addons.go:239] Setting addon inspektor-gadget=true in "addons-714840"
	I1101 09:48:08.977538  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:08.977996  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:08.981794  295049 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:48:08.987024  295049 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:48:08.995032  295049 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:48:08.995112  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:48:08.995221  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.004518  295049 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:48:09.007494  295049 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:48:09.007519  295049 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:48:09.007590  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.020937  295049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:48:09.021638  295049 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:48:09.024786  295049 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-714840"
	I1101 09:48:09.024903  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:09.025939  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:09.044639  295049 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:48:09.045266  295049 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:48:09.052816  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:48:09.056069  295049 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:48:09.056191  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.053648  295049 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:48:09.085180  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:48:09.087143  295049 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:48:09.087343  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:48:09.087658  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.093629  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:48:09.055823  295049 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:48:09.094765  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:48:09.094844  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.055850  295049 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:48:09.114208  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:48:09.114280  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.125699  295049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
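	The sed pipeline above rewrites the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the gateway, and enables the log plugin; after the replace, the Corefile gains a stanza roughly like this (reconstructed from the sed expressions, not copied from the log):
	  hosts {
	     192.168.49.1 host.minikube.internal
	     fallthrough
	  }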
	I1101 09:48:09.126252  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:48:09.129840  295049 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1101 09:48:09.139214  295049 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:48:09.139365  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:48:09.153718  295049 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:48:09.153795  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:48:09.153892  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.171825  295049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:48:09.171885  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:48:09.176316  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.178907  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:48:09.181857  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:48:09.185498  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:48:09.188357  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:48:09.194027  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:48:09.194115  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:48:09.194210  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.223202  295049 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:48:09.229337  295049 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:48:09.230318  295049 addons.go:239] Setting addon default-storageclass=true in "addons-714840"
	I1101 09:48:09.230360  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:09.230903  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:09.237476  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:48:09.237498  295049 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:48:09.237572  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.265382  295049 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:48:09.268217  295049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:48:09.268241  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:48:09.268308  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.284877  295049 host.go:66] Checking if "addons-714840" exists ...
	W1101 09:48:09.286719  295049 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:48:09.297832  295049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:48:09.297965  295049 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:48:09.302462  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.306406  295049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:48:09.307752  295049 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:48:09.310704  295049 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:48:09.310728  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:48:09.310803  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.312694  295049 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:48:09.312718  295049 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:48:09.312794  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.322817  295049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:48:09.323116  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.326201  295049 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:48:09.326219  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:48:09.326281  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.357130  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.372818  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.373674  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.399008  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.399029  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.438085  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.461509  295049 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:48:09.461533  295049 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:48:09.461599  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:09.463050  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.480247  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.488871  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.504208  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.519789  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:09.519855  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	W1101 09:48:09.522342  295049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:48:09.522391  295049 retry.go:31] will retry after 133.988056ms: ssh: handshake failed: EOF
	I1101 09:48:09.533810  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	W1101 09:48:09.535691  295049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:48:09.535714  295049 retry.go:31] will retry after 164.928826ms: ssh: handshake failed: EOF
	I1101 09:48:09.614214  295049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1101 09:48:09.708114  295049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:48:09.708147  295049 retry.go:31] will retry after 222.486304ms: ssh: handshake failed: EOF
	I1101 09:48:10.072314  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:48:10.072340  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:48:10.084118  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:48:10.149757  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:48:10.149848  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:48:10.191564  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:48:10.198987  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:48:10.216258  295049 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:48:10.216323  295049 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:48:10.236736  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:48:10.250007  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:48:10.250083  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:48:10.254066  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:48:10.254137  295049 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:48:10.298643  295049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:48:10.298727  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:48:10.309494  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:48:10.314401  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:48:10.320905  295049 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:10.321154  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:48:10.336579  295049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:48:10.336652  295049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:48:10.363204  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:48:10.364683  295049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.238926314s)
	I1101 09:48:10.364748  295049 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
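
The completed sed pipeline above patches the coredns ConfigMap in kube-system so that host.minikube.internal resolves to the container gateway. Reconstructed from the sed expression in that command (illustrative only, not captured from the cluster), the injected Corefile fragments are roughly:

	# inserted before the existing "errors" directive
	log
	# inserted before the existing "forward . /etc/resolv.conf" directive
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

The remaining plugins of the stock Corefile are left unchanged; only these two insertions are made before the ConfigMap is replaced.
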
	I1101 09:48:10.366515  295049 node_ready.go:35] waiting up to 6m0s for node "addons-714840" to be "Ready" ...
	I1101 09:48:10.406283  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:48:10.414073  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:48:10.414147  295049 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:48:10.417003  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:48:10.447223  295049 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:48:10.447302  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:48:10.473979  295049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:48:10.474048  295049 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:48:10.549762  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:48:10.549836  295049 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:48:10.551419  295049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:48:10.551481  295049 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:48:10.605107  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:48:10.605183  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:48:10.610410  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:10.648074  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:48:10.673710  295049 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:48:10.673778  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:48:10.676795  295049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:48:10.676866  295049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:48:10.693457  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:48:10.833862  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:48:10.837889  295049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:48:10.837963  295049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:48:10.844192  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:48:10.844217  295049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:48:10.869045  295049 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-714840" context rescaled to 1 replicas
	I1101 09:48:11.011118  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:48:11.011144  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:48:11.036667  295049 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:48:11.036694  295049 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:48:11.343272  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:48:11.343304  295049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:48:11.398963  295049 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:48:11.398994  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:48:11.412717  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.32851794s)
	I1101 09:48:11.617208  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:48:11.617282  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:48:11.659699  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:48:11.711489  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.519837078s)
	I1101 09:48:11.801403  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:48:11.801482  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:48:12.051120  295049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:48:12.051196  295049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:48:12.177086  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1101 09:48:12.383894  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:13.020603  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.821525859s)
	I1101 09:48:13.020717  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.783911931s)
	I1101 09:48:13.586990  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.277405209s)
	I1101 09:48:13.587104  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.272627556s)
	I1101 09:48:14.214156  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.850872015s)
	W1101 09:48:14.412142  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:15.365966  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.948878098s)
	I1101 09:48:15.366197  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.755502829s)
	W1101 09:48:15.366217  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:15.366233  295049 retry.go:31] will retry after 136.638648ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
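
The "apiVersion not set, kind not set" failure is consistent with the earlier scp of inspektor-gadget/ig-crd.yaml reporting only 14 bytes: every Kubernetes manifest must declare apiVersion and kind at the top level, so a near-empty file fails kubectl's client-side validation no matter how often the apply is retried. A quick way to confirm would be to inspect the copied file on the node; the commands below are a sketch using the profile name from this run, not output captured by the test:

	minikube -p addons-714840 ssh -- stat -c %s /etc/kubernetes/addons/ig-crd.yaml
	minikube -p addons-714840 ssh -- sudo cat /etc/kubernetes/addons/ig-crd.yaml

A well-formed ig-crd.yaml would begin with "apiVersion: apiextensions.k8s.io/v1" and "kind: CustomResourceDefinition", which alone is already larger than 14 bytes.
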
	I1101 09:48:15.366293  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.718145099s)
	I1101 09:48:15.366303  295049 addons.go:480] Verifying addon metrics-server=true in "addons-714840"
	I1101 09:48:15.366333  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.672802413s)
	I1101 09:48:15.366341  295049 addons.go:480] Verifying addon registry=true in "addons-714840"
	I1101 09:48:15.366602  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.9602943s)
	I1101 09:48:15.366724  295049 addons.go:480] Verifying addon ingress=true in "addons-714840"
	I1101 09:48:15.367048  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.533110877s)
	I1101 09:48:15.367385  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.707602549s)
	W1101 09:48:15.368671  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:48:15.368696  295049 retry.go:31] will retry after 348.42652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
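
This first pass fails because csi-hostpath-snapshotclass.yaml is validated in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs; until the VolumeSnapshotClass CRD is established, the API server has no mapping for that kind, hence "ensure CRDs are installed first". The scheduled retry with --force normally succeeds once the CRDs are registered. Serialized by hand, the equivalent ordering would look roughly like this (hypothetical commands, not part of the minikube run):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
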
	I1101 09:48:15.369582  295049 out.go:179] * Verifying registry addon...
	I1101 09:48:15.369613  295049 out.go:179] * Verifying ingress addon...
	I1101 09:48:15.371487  295049 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-714840 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:48:15.375262  295049 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:48:15.375327  295049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:48:15.406452  295049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:48:15.406472  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:15.406950  295049 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:48:15.406966  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:15.503690  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:15.713204  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.536019399s)
	I1101 09:48:15.713239  295049 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-714840"
	I1101 09:48:15.716600  295049 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:48:15.717877  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:48:15.721704  295049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:48:15.734406  295049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:48:15.734480  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:15.881249  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:15.881674  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:16.229221  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:16.379797  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:16.380479  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:16.625750  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.121972438s)
	W1101 09:48:16.625786  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:16.625807  295049 retry.go:31] will retry after 542.876452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:16.725738  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:16.869980  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:16.881703  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:16.881764  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:17.018829  295049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:48:17.018961  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:17.037189  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:17.162371  295049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:48:17.169674  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:17.176857  295049 addons.go:239] Setting addon gcp-auth=true in "addons-714840"
	I1101 09:48:17.177003  295049 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:48:17.177474  295049 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:48:17.203647  295049 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:48:17.203709  295049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:48:17.226251  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:17.226354  295049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:48:17.379945  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:17.380420  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:17.725629  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:17.880360  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:17.880708  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:48:18.020686  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:18.020802  295049 retry.go:31] will retry after 313.866685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:18.023926  295049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:48:18.026901  295049 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:48:18.029822  295049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:48:18.029860  295049 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:48:18.044444  295049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:48:18.044532  295049 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:48:18.059372  295049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:48:18.059450  295049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:48:18.073985  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:48:18.224723  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:18.335077  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:18.381075  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:18.381492  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:18.637384  295049 addons.go:480] Verifying addon gcp-auth=true in "addons-714840"
	I1101 09:48:18.640975  295049 out.go:179] * Verifying gcp-auth addon...
	I1101 09:48:18.644589  295049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:48:18.649201  295049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:48:18.649270  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:18.750157  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:18.870476  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:18.879708  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:18.880757  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:19.148686  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:19.225902  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:19.282933  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:19.282965  295049 retry.go:31] will retry after 1.138525801s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:19.379160  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:19.379338  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:19.648566  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:19.725694  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:19.879343  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:19.879667  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:20.147823  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:20.225529  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:20.378800  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:20.379095  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:20.422383  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:20.647814  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:20.725040  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:20.879477  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:20.879556  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:21.147600  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:48:21.221797  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:21.221830  295049 retry.go:31] will retry after 1.895111913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:21.224232  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:21.370069  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:21.379334  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:21.379622  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:21.647474  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:21.725405  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:21.878971  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:21.879267  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:22.149236  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:22.225128  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:22.379352  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:22.379507  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:22.649187  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:22.724969  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:22.879475  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:22.879630  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:23.117974  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:23.148176  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:23.225958  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:23.370187  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:23.381092  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:23.381487  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:23.647805  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:23.725459  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:23.881214  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:23.881478  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:48:23.927708  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:23.927740  295049 retry.go:31] will retry after 1.237875137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:24.147953  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:24.224907  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:24.380574  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:24.381008  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:24.648486  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:24.725968  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:24.879508  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:24.879567  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:25.148261  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:25.166413  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:25.225319  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:25.371947  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:25.379885  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:25.380323  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:25.647953  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:25.725497  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:25.881103  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:25.881532  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:48:25.978901  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:25.978934  295049 retry.go:31] will retry after 1.740039919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:26.147968  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:26.224733  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:26.378900  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:26.379048  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:26.647985  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:26.725286  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:26.880818  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:26.881464  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:27.147498  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:27.225299  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:27.381053  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:27.381451  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:27.648547  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:27.719686  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:27.725296  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:27.869795  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:27.880100  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:27.880445  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:28.148060  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:28.225804  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:28.379211  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:28.379558  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:48:28.522635  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:28.522676  295049 retry.go:31] will retry after 6.367920624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:28.647521  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:28.725674  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:28.878141  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:28.878507  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:29.147630  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:29.225698  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:29.378602  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:29.378812  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:29.647873  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:29.724504  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:29.870175  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:29.879419  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:29.879816  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:30.148238  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:30.225400  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:30.379725  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:30.379999  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:30.648744  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:30.724556  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:30.879131  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:30.879450  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:31.148441  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:31.225521  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:31.378989  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:31.379076  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:31.648200  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:31.725467  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:31.870453  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:31.878815  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:31.878975  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:32.148125  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:32.224820  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:32.378782  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:32.378976  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:32.648163  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:32.725063  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:32.879485  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:32.879492  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:33.148325  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:33.225559  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:33.378794  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:33.379081  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:33.648122  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:33.724711  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:33.878638  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:33.878784  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:34.147959  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:34.225409  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:34.370553  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:34.382200  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:34.389899  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:34.648170  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:34.725598  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:34.879315  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:34.879414  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:34.891612  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:35.148032  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:35.224903  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:35.380593  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:35.381020  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:35.649054  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:48:35.716994  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:35.717026  295049 retry.go:31] will retry after 7.523911616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:35.725411  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:35.878140  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:35.878955  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:36.147985  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:36.224991  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:36.379952  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:36.379986  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:36.648126  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:36.724794  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:36.869386  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:36.880310  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:36.880606  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:37.147720  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:37.225508  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:37.379149  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:37.379507  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:37.647511  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:37.725406  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:37.879279  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:37.879760  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:38.147634  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:38.225536  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:38.378967  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:38.379113  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:38.648140  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:38.725117  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:38.870210  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:38.879414  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:38.879611  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:39.147733  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:39.224889  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:39.378440  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:39.379046  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:39.649182  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:39.725004  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:39.879271  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:39.879649  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:40.148052  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:40.225036  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:40.379576  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:40.379616  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:40.648265  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:40.725200  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:40.870652  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:40.879361  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:40.879650  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:41.147356  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:41.225531  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:41.378787  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:41.378892  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:41.647933  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:41.724996  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:41.878727  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:41.878877  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:42.148689  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:42.225918  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:42.378415  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:42.378625  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:42.648268  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:42.725528  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:42.878409  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:42.878564  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:43.147653  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:43.229909  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:43.242081  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:48:43.370380  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:43.378710  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:43.378926  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:43.648310  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:43.727192  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:43.879888  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:43.879993  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:48:44.062971  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:44.063004  295049 retry.go:31] will retry after 8.722094097s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:44.147929  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:44.224729  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:44.378321  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:44.378547  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:44.647503  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:44.725196  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:44.880030  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:44.880174  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:45.150221  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:45.225222  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:45.373340  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:45.379420  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:45.379903  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:45.647906  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:45.724710  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:45.879466  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:45.880025  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:46.147952  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:46.226877  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:46.378660  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:46.378912  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:46.648311  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:46.725338  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:46.879215  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:46.880882  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:47.148059  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:47.225451  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:47.379589  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:47.380038  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:47.648054  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:47.724865  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:47.870414  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:47.879299  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:47.879551  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:48.147619  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:48.225332  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:48.379524  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:48.379672  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:48.647917  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:48.724880  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:48.879432  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:48.879543  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:49.147836  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:49.225251  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:49.379035  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:49.379118  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:49.648240  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:49.725400  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:49.879243  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:49.881163  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:50.148233  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:50.225285  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:48:50.370082  295049 node_ready.go:57] node "addons-714840" has "Ready":"False" status (will retry)
	I1101 09:48:50.378896  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:50.379083  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:50.648274  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:50.725271  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:50.879964  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:50.880050  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:51.148636  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:51.225575  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:51.379190  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:51.379332  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:51.648371  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:51.725047  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:51.878931  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:51.878940  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:52.148136  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:52.225190  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:52.370973  295049 node_ready.go:49] node "addons-714840" is "Ready"
	I1101 09:48:52.371013  295049 node_ready.go:38] duration metric: took 42.004277348s for node "addons-714840" to be "Ready" ...
	I1101 09:48:52.371027  295049 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:48:52.371134  295049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:48:52.395011  295049 api_server.go:72] duration metric: took 43.568376456s to wait for apiserver process to appear ...
	I1101 09:48:52.395094  295049 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:48:52.395137  295049 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 09:48:52.412796  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:52.413015  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:52.432551  295049 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 09:48:52.440655  295049 api_server.go:141] control plane version: v1.34.1
	I1101 09:48:52.440734  295049 api_server.go:131] duration metric: took 45.610034ms to wait for apiserver health ...
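	The healthz probe above is an HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. An equivalent manual check, assuming the endpoint logged above is reachable from the host and anonymously readable (a sketch only, not part of the test run):
	# Query the same endpoint the test polls; -k skips TLS verification of the
	# minikube-generated certificate. A healthy apiserver answers 200 "ok".
	curl -sk https://192.168.49.2:8443/healthz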
	I1101 09:48:52.440759  295049 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:48:52.481851  295049 system_pods.go:59] 18 kube-system pods found
	I1101 09:48:52.482022  295049 system_pods.go:61] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending
	I1101 09:48:52.482046  295049 system_pods.go:61] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending
	I1101 09:48:52.482113  295049 system_pods.go:61] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending
	I1101 09:48:52.482144  295049 system_pods.go:61] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:52.482182  295049 system_pods.go:61] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:52.482227  295049 system_pods.go:61] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:52.482312  295049 system_pods.go:61] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:52.482338  295049 system_pods.go:61] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending
	I1101 09:48:52.482360  295049 system_pods.go:61] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:52.482397  295049 system_pods.go:61] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:52.482479  295049 system_pods.go:61] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending
	I1101 09:48:52.482507  295049 system_pods.go:61] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending
	I1101 09:48:52.482528  295049 system_pods.go:61] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending
	I1101 09:48:52.482565  295049 system_pods.go:61] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending
	I1101 09:48:52.482650  295049 system_pods.go:61] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending
	I1101 09:48:52.482677  295049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending
	I1101 09:48:52.482723  295049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending
	I1101 09:48:52.482746  295049 system_pods.go:61] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending
	I1101 09:48:52.482814  295049 system_pods.go:74] duration metric: took 42.033793ms to wait for pod list to return data ...
	I1101 09:48:52.482841  295049 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:48:52.522030  295049 default_sa.go:45] found service account: "default"
	I1101 09:48:52.522107  295049 default_sa.go:55] duration metric: took 39.225238ms for default service account to be created ...
	I1101 09:48:52.522146  295049 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:48:52.551484  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:52.551566  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending
	I1101 09:48:52.551588  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending
	I1101 09:48:52.551610  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending
	I1101 09:48:52.551645  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending
	I1101 09:48:52.551674  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:52.551698  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:52.551736  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:52.551761  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:52.551782  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending
	I1101 09:48:52.551817  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:52.551842  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:52.551863  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending
	I1101 09:48:52.551899  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending
	I1101 09:48:52.551924  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending
	I1101 09:48:52.551951  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:52.551986  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending
	I1101 09:48:52.552012  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending
	I1101 09:48:52.552035  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending
	I1101 09:48:52.552069  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending
	I1101 09:48:52.552102  295049 retry.go:31] will retry after 212.35502ms: missing components: kube-dns
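	At this point the node is Ready but the addon pods are still Pending, so the loop keeps retrying until kube-dns (CoreDNS) reports Ready. A hand-run equivalent of that wait, assuming the standard kubeadm/minikube k8s-app=kube-dns label on the CoreDNS pods (a sketch only, not part of the test run):
	# CoreDNS pods carry the k8s-app=kube-dns label in kubeadm-based clusters.
	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s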
	I1101 09:48:52.654070  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:52.786070  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:48:52.809713  295049 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:48:52.809789  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:52.819163  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:52.819250  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:48:52.819272  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending
	I1101 09:48:52.819296  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending
	I1101 09:48:52.819329  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending
	I1101 09:48:52.819353  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:52.819385  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:52.819419  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:52.819444  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:52.819465  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending
	I1101 09:48:52.819499  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:52.819532  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:52.819556  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:52.819588  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending
	I1101 09:48:52.819614  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:52.819642  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:52.819675  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending
	I1101 09:48:52.819704  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:52.819725  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending
	I1101 09:48:52.819760  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending
	I1101 09:48:52.819797  295049 retry.go:31] will retry after 238.204487ms: missing components: kube-dns
	I1101 09:48:52.914771  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:52.917278  295049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:48:52.917300  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:53.065935  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:53.066024  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:48:53.066050  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:48:53.066099  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:48:53.066123  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending
	I1101 09:48:53.066145  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:53.066180  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:53.066205  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:53.066227  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:53.066269  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:48:53.066317  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:53.066352  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:53.066382  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:53.066403  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending
	I1101 09:48:53.066443  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:53.066469  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:53.066491  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending
	I1101 09:48:53.066535  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:53.066561  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending
	I1101 09:48:53.066584  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:48:53.066630  295049 retry.go:31] will retry after 414.475783ms: missing components: kube-dns
	I1101 09:48:53.159758  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:53.234104  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:53.379999  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:53.380167  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:53.487796  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:53.487888  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:48:53.487913  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:48:53.487953  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:48:53.487977  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:48:53.487997  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:53.488029  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:53.488053  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:53.488074  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:53.488113  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:48:53.488138  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:53.488160  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:53.488199  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:53.488226  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:48:53.488252  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:53.488294  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:53.488315  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:48:53.488353  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:53.488379  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:53.488400  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:48:53.488447  295049 retry.go:31] will retry after 575.227137ms: missing components: kube-dns
	I1101 09:48:53.658223  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:53.756356  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:53.880043  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:53.880512  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:54.070836  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:54.070922  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:48:54.070949  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:48:54.070991  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:48:54.071021  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:48:54.071043  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:54.071081  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:54.071105  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:54.071126  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:54.071165  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:48:54.071188  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:54.071212  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:54.071250  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:54.071279  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:48:54.071306  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:54.071346  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:54.071368  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:48:54.071407  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:54.071433  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:54.071456  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:48:54.071502  295049 retry.go:31] will retry after 507.349859ms: missing components: kube-dns
	I1101 09:48:54.149118  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:54.225426  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:54.380537  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:54.380493  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:54.413230  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.6270613s)
	W1101 09:48:54.413319  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:48:54.413354  295049 retry.go:31] will retry after 10.756894019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
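The validation failure above repeats on every retry: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest document is missing its top-level apiVersion and kind fields, so the addon apply loop backs off and re-applies the same broken file. The error message itself names a workaround, skipping client-side validation. As a hedged sketch only (a manual command against the same files quoted in the log, not something the addon manager does in this run), that workaround would look like:

	# manual workaround sketch: skip client-side validation (masks, does not fix, the bad manifest)
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml

Note that --validate=false only suppresses the check; the CRD manifest would still need apiVersion and kind to be a well-formed Kubernetes object.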
	I1101 09:48:54.593635  295049 system_pods.go:86] 19 kube-system pods found
	I1101 09:48:54.593718  295049 system_pods.go:89] "coredns-66bc5c9577-jxfw2" [0a627d21-fc25-4313-acb1-65d33cee1d5e] Running
	I1101 09:48:54.593745  295049 system_pods.go:89] "csi-hostpath-attacher-0" [f0088bfc-2012-47ae-b57b-a9b2d45a8daa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:48:54.593788  295049 system_pods.go:89] "csi-hostpath-resizer-0" [23d6c0b5-810b-4598-b21c-9e7c1f7036ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:48:54.593813  295049 system_pods.go:89] "csi-hostpathplugin-prqx4" [4811dc91-b57e-4d37-b391-0f7da23a7197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:48:54.593831  295049 system_pods.go:89] "etcd-addons-714840" [d20e623a-965e-4389-b111-a579dc729308] Running
	I1101 09:48:54.593852  295049 system_pods.go:89] "kindnet-thg89" [21f87cd1-d5f0-4d7a-994b-1678d2b778f7] Running
	I1101 09:48:54.593884  295049 system_pods.go:89] "kube-apiserver-addons-714840" [3fb01b12-141f-4f26-af9a-a2fe0def8b05] Running
	I1101 09:48:54.593909  295049 system_pods.go:89] "kube-controller-manager-addons-714840" [1f2f5a0b-9cee-463b-88f5-db08eb34c1cd] Running
	I1101 09:48:54.593933  295049 system_pods.go:89] "kube-ingress-dns-minikube" [6d1920e2-db6b-4ab6-b984-b733d6f18a34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:48:54.593968  295049 system_pods.go:89] "kube-proxy-jkzc6" [a9dde5aa-617a-41ee-96aa-9591572ec9d8] Running
	I1101 09:48:54.593994  295049 system_pods.go:89] "kube-scheduler-addons-714840" [b6e64b7c-0cd7-45ad-b175-18232fb9a300] Running
	I1101 09:48:54.594024  295049 system_pods.go:89] "metrics-server-85b7d694d7-mshff" [94823452-3826-4670-81f4-bb22ab5bcb08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:48:54.594079  295049 system_pods.go:89] "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:48:54.594107  295049 system_pods.go:89] "registry-6b586f9694-czvz6" [dd7c69d6-94a2-4f88-b89d-cb0d58275e4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:48:54.594130  295049 system_pods.go:89] "registry-creds-764b6fb674-bnkwh" [4d74e2c4-c5a3-45e3-9a6e-e70783d9e315] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:48:54.594166  295049 system_pods.go:89] "registry-proxy-w2s6j" [f1705d9b-3591-4d10-8c0c-439f7819099a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:48:54.594192  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fzk67" [03526494-7a89-4467-af9e-7f1078e50140] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:54.594225  295049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gk5gb" [69ed7d03-92f0-435e-9712-1087f969c6ea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:48:54.594270  295049 system_pods.go:89] "storage-provisioner" [362127e2-c2e4-4578-9804-21619ae96deb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:48:54.594295  295049 system_pods.go:126] duration metric: took 2.072124766s to wait for k8s-apps to be running ...
	I1101 09:48:54.594329  295049 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:48:54.594422  295049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:48:54.619374  295049 system_svc.go:56] duration metric: took 25.034852ms WaitForService to wait for kubelet
	I1101 09:48:54.619450  295049 kubeadm.go:587] duration metric: took 45.792819992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:48:54.619494  295049 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:48:54.622914  295049 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:48:54.622996  295049 node_conditions.go:123] node cpu capacity is 2
	I1101 09:48:54.623023  295049 node_conditions.go:105] duration metric: took 3.511174ms to run NodePressure ...
	I1101 09:48:54.623049  295049 start.go:242] waiting for startup goroutines ...
	I1101 09:48:54.691910  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:54.725824  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:54.879733  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:54.879853  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:55.148557  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:55.249203  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:55.380866  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:55.381292  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:55.648632  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:55.725532  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:55.883117  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:55.883589  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:56.147880  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:56.227229  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:56.384762  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:56.385609  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:56.648373  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:56.729362  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:56.885285  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:56.886019  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:57.148271  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:57.227130  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:57.381095  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:57.381313  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:57.650241  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:57.750702  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:57.893168  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:57.893488  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:58.147727  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:58.225932  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:58.380720  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:58.380711  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:58.648293  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:58.726356  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:58.879734  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:58.880771  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:59.147893  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:59.224866  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:59.379571  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:59.380356  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:48:59.647958  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:48:59.725432  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:48:59.879298  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:48:59.880219  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:00.152463  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:00.239649  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:00.392413  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:00.426980  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:00.649257  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:00.726047  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:00.880864  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:00.881272  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:01.148722  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:01.225154  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:01.381149  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:01.381973  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:01.649018  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:01.726967  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:01.881272  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:01.881773  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:02.148094  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:02.224917  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:02.380478  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:02.380663  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:02.648583  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:02.726140  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:02.879871  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:02.880230  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:03.148683  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:03.250690  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:03.379192  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:03.379346  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:03.648902  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:03.725887  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:03.881270  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:03.881558  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:04.147422  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:04.225984  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:04.379712  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:04.381082  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:04.648232  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:04.726312  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:04.879901  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:04.879998  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:05.149010  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:05.171273  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:49:05.224762  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:05.380193  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:05.380707  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:05.675707  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:05.770071  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:05.880798  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:05.881364  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:06.149203  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:06.226123  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:06.379170  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:06.379411  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:06.526232  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.354920353s)
	W1101 09:49:06.526312  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:49:06.526345  295049 retry.go:31] will retry after 19.510029492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:49:06.648370  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:06.725655  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:06.883386  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:06.884261  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:07.148723  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:07.225090  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:07.380388  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:07.381052  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:07.675580  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:07.763908  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:07.878488  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:07.879483  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:08.148203  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:08.226447  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:08.381932  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:08.382471  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:08.648282  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:08.726004  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:08.880832  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:08.881108  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:09.148280  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:09.226017  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:09.380255  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:09.380854  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:09.648477  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:09.726153  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:09.879155  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:09.879808  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:10.148232  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:10.225977  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:10.380363  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:10.380778  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:10.648045  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:10.725413  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:10.879701  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:10.879965  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:11.148123  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:11.225984  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:11.380942  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:11.381193  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:11.648686  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:11.725702  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:11.881029  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:11.881540  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:12.147797  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:12.225631  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:12.379235  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:12.380101  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:12.649908  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:12.725230  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:12.879687  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:12.879954  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:13.148306  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:13.225825  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:13.382038  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:13.382414  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:13.648439  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:13.726785  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:13.879513  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:13.879686  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:14.148471  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:14.226326  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:14.379977  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:14.380592  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:14.649431  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:14.726046  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:14.880387  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:14.881111  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:15.148951  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:15.225507  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:15.381645  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:15.381810  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:15.648259  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:15.725775  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:15.880554  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:15.881366  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:16.147908  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:16.225559  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:16.380113  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:16.380622  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:16.648003  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:16.725391  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:16.882937  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:16.882940  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:17.147851  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:17.224527  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:17.379537  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:17.379719  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:17.648616  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:17.725757  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:17.883769  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:17.884278  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:18.148759  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:18.250454  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:18.379737  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:18.381477  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:18.650268  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:18.739035  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:18.880970  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:18.881161  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:19.148545  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:19.225733  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:19.379597  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:19.379784  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:19.648568  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:19.729927  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:19.879938  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:19.880636  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:20.147843  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:20.225238  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:20.380631  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:20.380786  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:20.647885  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:20.725411  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:20.879665  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:20.879803  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:21.148371  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:21.226345  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:21.378879  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:21.379586  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:21.647892  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:21.725541  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:21.879849  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:21.880325  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:22.148020  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:22.226205  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:22.380117  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:22.380216  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:22.650129  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:22.726305  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:22.885370  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:22.886733  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:23.148514  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:23.226878  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:23.381099  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:23.381569  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:23.647792  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:23.725435  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:23.880453  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:23.880845  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:24.147532  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:24.225791  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:24.380277  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:24.380765  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:24.651502  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:24.726161  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:24.881488  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:24.881981  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:25.148493  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:25.225879  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:25.379032  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:25.380493  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:25.648820  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:25.728518  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:25.879674  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:25.880113  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:26.037417  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:49:26.148146  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:26.226862  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:26.380049  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:26.380226  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:26.649348  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:26.726590  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:26.880278  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:26.880573  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:27.129350  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.091888914s)
	W1101 09:49:27.129387  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:49:27.129407  295049 retry.go:31] will retry after 31.459578892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:49:27.148471  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:27.226158  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:27.378709  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:27.379506  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:27.648001  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:27.725581  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:27.881783  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:27.882198  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:28.148623  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:28.225708  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:28.380258  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:28.380436  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:28.647525  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:28.726058  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:28.880666  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:28.880797  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:29.148066  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:29.230961  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:29.379510  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:29.380004  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:29.647966  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:29.725215  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:29.878950  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:29.879272  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:30.148586  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:30.226604  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:30.379232  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:30.379630  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:30.648069  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:30.726512  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:30.879844  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:30.880527  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:31.148507  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:31.225803  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:31.381190  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:31.382478  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:31.647866  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:31.725736  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:31.879699  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:31.879893  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:32.147862  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:32.226841  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:32.379683  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:49:32.380140  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:32.648714  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:32.749856  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:32.879381  295049 kapi.go:107] duration metric: took 1m17.504049153s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:49:32.879568  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:33.150311  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:33.252332  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:33.378583  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:33.648676  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:33.726204  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:33.878296  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:34.149330  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:34.225956  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:34.379267  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:34.650592  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:34.728563  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:34.879421  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:35.148886  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:35.225922  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:35.379357  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:35.648394  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:35.726346  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:35.879253  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:36.149117  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:36.226418  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:36.379017  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:36.648069  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:36.725736  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:36.881135  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:37.148400  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:37.225867  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:37.378820  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:37.648465  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:37.726021  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:37.878851  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:38.148801  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:38.226155  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:38.380796  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:38.650394  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:38.728295  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:38.878697  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:39.147765  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:39.225294  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:39.379344  295049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:49:39.647476  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:39.748669  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:39.879782  295049 kapi.go:107] duration metric: took 1m24.504523441s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:49:40.148772  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:40.225268  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:40.649348  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:40.726305  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:41.148651  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:41.225536  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:41.648057  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:41.725887  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:42.151203  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:42.225557  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:42.647673  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:42.726538  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:43.151246  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:43.250433  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:43.648453  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:43.727249  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:44.148257  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:44.225934  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:44.651167  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:44.758740  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:45.163045  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:45.261704  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:45.647603  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:45.725780  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:46.149565  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:46.226318  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:46.648134  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:46.725514  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:47.148331  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:47.249161  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:47.648824  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:47.725602  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:48.148356  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:48.226417  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:48.648588  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:48.751215  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:49.148623  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:49.225922  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:49.647501  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:49.725677  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:50.149725  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:50.225206  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:50.647765  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:50.725252  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:49:51.148179  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:51.225403  295049 kapi.go:107] duration metric: took 1m35.503695445s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:49:51.647697  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:52.148747  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:52.649036  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:53.147592  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:53.648750  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:54.148678  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:54.648894  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:55.148522  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:55.647906  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:56.148822  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:56.648512  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:57.148175  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:57.648127  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:58.147795  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:58.589188  295049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:49:58.648476  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:59.151377  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:59.649114  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:49:59.984830  295049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.395595613s)
	W1101 09:49:59.984881  295049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:49:59.985001  295049 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
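
The stderr above pins the inspektor-gadget failure on ig-crd.yaml: kubectl's client-side validation rejects any manifest document that does not declare apiVersion and kind. A minimal sketch of a document that would pass that check follows; the object shown is a placeholder for illustration, not the addon's actual CRD, whose contents are not included in this log.

    # Minimal sketch only: every document fed to `kubectl apply` must carry these
    # two fields, which the error above reports as missing from ig-crd.yaml.
    apiVersion: v1            # API group/version of the object (placeholder)
    kind: ConfigMap           # object type (placeholder)
    metadata:
      name: validation-demo   # hypothetical name, not taken from the report
    data: {}

The workaround the error message itself suggests, --validate=false, would let the apply proceed, but it only skips the client-side check rather than supplying the missing fields.
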
	I1101 09:50:00.149784  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:50:00.650355  295049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:50:01.148492  295049 kapi.go:107] duration metric: took 1m42.503902767s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:50:01.151525  295049 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-714840 cluster.
	I1101 09:50:01.154273  295049 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:50:01.156904  295049 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
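
The two messages above describe the gcp-auth opt-out in prose only; the sketch below shows where that label would sit in a pod spec. The log names just the label key, so the "true" value, the pod name, and the image are assumptions made for illustration.

    # Hypothetical pod that opts out of GCP credential mounting.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds-demo        # placeholder name
      labels:
        gcp-auth-skip-secret: "true" # key taken from the log; value assumed
    spec:
      containers:
      - name: app
        image: busybox:stable        # placeholder image
        command: ["sleep", "3600"]
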
	I1101 09:50:01.159930  295049 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1101 09:50:01.162842  295049 addons.go:515] duration metric: took 1m52.335736601s for enable addons: enabled=[nvidia-device-plugin registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1101 09:50:01.162913  295049 start.go:247] waiting for cluster config update ...
	I1101 09:50:01.162937  295049 start.go:256] writing updated cluster config ...
	I1101 09:50:01.163254  295049 ssh_runner.go:195] Run: rm -f paused
	I1101 09:50:01.167625  295049 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:50:01.171736  295049 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jxfw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.177952  295049 pod_ready.go:94] pod "coredns-66bc5c9577-jxfw2" is "Ready"
	I1101 09:50:01.177986  295049 pod_ready.go:86] duration metric: took 6.218377ms for pod "coredns-66bc5c9577-jxfw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.181353  295049 pod_ready.go:83] waiting for pod "etcd-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.187111  295049 pod_ready.go:94] pod "etcd-addons-714840" is "Ready"
	I1101 09:50:01.187147  295049 pod_ready.go:86] duration metric: took 5.758403ms for pod "etcd-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.189783  295049 pod_ready.go:83] waiting for pod "kube-apiserver-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.195901  295049 pod_ready.go:94] pod "kube-apiserver-addons-714840" is "Ready"
	I1101 09:50:01.195936  295049 pod_ready.go:86] duration metric: took 6.120358ms for pod "kube-apiserver-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.198818  295049 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.572385  295049 pod_ready.go:94] pod "kube-controller-manager-addons-714840" is "Ready"
	I1101 09:50:01.572424  295049 pod_ready.go:86] duration metric: took 373.574477ms for pod "kube-controller-manager-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:01.771953  295049 pod_ready.go:83] waiting for pod "kube-proxy-jkzc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:02.172312  295049 pod_ready.go:94] pod "kube-proxy-jkzc6" is "Ready"
	I1101 09:50:02.172341  295049 pod_ready.go:86] duration metric: took 400.361119ms for pod "kube-proxy-jkzc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:02.371946  295049 pod_ready.go:83] waiting for pod "kube-scheduler-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:02.772269  295049 pod_ready.go:94] pod "kube-scheduler-addons-714840" is "Ready"
	I1101 09:50:02.772299  295049 pod_ready.go:86] duration metric: took 400.323391ms for pod "kube-scheduler-addons-714840" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:50:02.772312  295049 pod_ready.go:40] duration metric: took 1.6046497s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:50:02.838511  295049 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:50:02.841638  295049 out.go:179] * Done! kubectl is now configured to use "addons-714840" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 09:50:30 addons-714840 crio[829]: time="2025-11-01T09:50:30.520315658Z" level=info msg="Started container" PID=5363 containerID=7d76d23d8a704f281b09c86de4dbe675fea6aca1643784a6b1175d78a699dc2a description=default/test-local-path/busybox id=2b6c0399-6c4b-4632-ae5d-6d816b977174 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a435dd3e424ee09a79b8f8a889524934d6d90dcd355855b09f40a08d4aec1239
	Nov 01 09:50:32 addons-714840 crio[829]: time="2025-11-01T09:50:32.094751847Z" level=info msg="Stopping pod sandbox: a435dd3e424ee09a79b8f8a889524934d6d90dcd355855b09f40a08d4aec1239" id=b50839aa-268d-4c97-af2b-18aa55a66c8f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:50:32 addons-714840 crio[829]: time="2025-11-01T09:50:32.095066754Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:a435dd3e424ee09a79b8f8a889524934d6d90dcd355855b09f40a08d4aec1239 UID:dd8856fe-807d-489d-9675-9adc9560e7ff NetNS:/var/run/netns/37524e9f-a981-43a5-9c93-0011a6aba1cd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40011b0630}] Aliases:map[]}"
	Nov 01 09:50:32 addons-714840 crio[829]: time="2025-11-01T09:50:32.095238358Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:50:32 addons-714840 crio[829]: time="2025-11-01T09:50:32.118864055Z" level=info msg="Stopped pod sandbox: a435dd3e424ee09a79b8f8a889524934d6d90dcd355855b09f40a08d4aec1239" id=b50839aa-268d-4c97-af2b-18aa55a66c8f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.39364313Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe/POD" id=dcc006ca-8a7b-4d38-a8dc-5740db8087a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.393727463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.429217496Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe Namespace:local-path-storage ID:e7763a31e62ecd7b62a01996bdb588730ad7422d9fce6806412ed186fab29bf5 UID:853fb37d-7b33-4721-8030-320d25f4c705 NetNS:/var/run/netns/01a48bdc-d8bc-4ef1-a168-c1f3ec189fcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004bae30}] Aliases:map[]}"
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.429262354Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.450568182Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe Namespace:local-path-storage ID:e7763a31e62ecd7b62a01996bdb588730ad7422d9fce6806412ed186fab29bf5 UID:853fb37d-7b33-4721-8030-320d25f4c705 NetNS:/var/run/netns/01a48bdc-d8bc-4ef1-a168-c1f3ec189fcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004bae30}] Aliases:map[]}"
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.450738874Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe for CNI network kindnet (type=ptp)"
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.460553899Z" level=info msg="Ran pod sandbox e7763a31e62ecd7b62a01996bdb588730ad7422d9fce6806412ed186fab29bf5 with infra container: local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe/POD" id=dcc006ca-8a7b-4d38-a8dc-5740db8087a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.464641405Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=9494a0a8-bc1d-4966-a7c6-da977f5c1b13 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.469399792Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=05695271-55b7-4229-be7c-37e27d2ba4df name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.480686761Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe/helper-pod" id=c157c449-bee2-4c5f-b3af-baf5dc7766fe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.480859653Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.489506718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.490038988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.513033666Z" level=info msg="Created container 8a4ebda531d8cc81baf2cd8607c2e253ccdb053f8520fa08643a463a6a62f377: local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe/helper-pod" id=c157c449-bee2-4c5f-b3af-baf5dc7766fe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.517544387Z" level=info msg="Starting container: 8a4ebda531d8cc81baf2cd8607c2e253ccdb053f8520fa08643a463a6a62f377" id=38176a3d-a842-4c42-b454-4bd36b059a0c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:50:33 addons-714840 crio[829]: time="2025-11-01T09:50:33.52735334Z" level=info msg="Started container" PID=5436 containerID=8a4ebda531d8cc81baf2cd8607c2e253ccdb053f8520fa08643a463a6a62f377 description=local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe/helper-pod id=38176a3d-a842-4c42-b454-4bd36b059a0c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7763a31e62ecd7b62a01996bdb588730ad7422d9fce6806412ed186fab29bf5
	Nov 01 09:50:35 addons-714840 crio[829]: time="2025-11-01T09:50:35.111637993Z" level=info msg="Stopping pod sandbox: e7763a31e62ecd7b62a01996bdb588730ad7422d9fce6806412ed186fab29bf5" id=0cbf9083-80de-46e7-bd37-e8ea543e5e45 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:50:35 addons-714840 crio[829]: time="2025-11-01T09:50:35.112600946Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe Namespace:local-path-storage ID:e7763a31e62ecd7b62a01996bdb588730ad7422d9fce6806412ed186fab29bf5 UID:853fb37d-7b33-4721-8030-320d25f4c705 NetNS:/var/run/netns/01a48bdc-d8bc-4ef1-a168-c1f3ec189fcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40011b0080}] Aliases:map[]}"
	Nov 01 09:50:35 addons-714840 crio[829]: time="2025-11-01T09:50:35.112980452Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe from CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:50:35 addons-714840 crio[829]: time="2025-11-01T09:50:35.138992843Z" level=info msg="Stopped pod sandbox: e7763a31e62ecd7b62a01996bdb588730ad7422d9fce6806412ed186fab29bf5" id=0cbf9083-80de-46e7-bd37-e8ea543e5e45 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	8a4ebda531d8c       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   e7763a31e62ec       helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe   local-path-storage
	7d76d23d8a704       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   a435dd3e424ee       test-local-path                                              default
	05382c122e41e       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   daef0b6406878       helper-pod-create-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe   local-path-storage
	9fd9d44fbc02e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          29 seconds ago       Running             busybox                                  0                   d5f668a69c058       busybox                                                      default
	cfb354a027b1d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 34 seconds ago       Running             gcp-auth                                 0                   39f70f641227a       gcp-auth-78565c9fb4-rfbql                                    gcp-auth
	10a1c7de04e0d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          45 seconds ago       Running             csi-snapshotter                          0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                                     kube-system
	4a127573889cd       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          46 seconds ago       Running             csi-provisioner                          0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                                     kube-system
	0d38db82f09d9       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            48 seconds ago       Running             liveness-probe                           0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                                     kube-system
	7dcafd9990f60       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           48 seconds ago       Running             hostpath                                 0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                                     kube-system
	290dbe24a3813       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                50 seconds ago       Running             node-driver-registrar                    0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                                     kube-system
	09545d05e577c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            52 seconds ago       Running             gadget                                   0                   9f997d17c414a       gadget-lhntn                                                 gadget
	9abf6837f0e01       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             55 seconds ago       Running             controller                               0                   017c407ac9a7f       ingress-nginx-controller-675c5ddd98-9bmq7                    ingress-nginx
	651635f68ebde       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              patch                                    0                   e8a7526bd628e       gcp-auth-certs-patch-k4dwf                                   gcp-auth
	e6752a08771bf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   fcb1e6a1535ba       gcp-auth-certs-create-fwlg7                                  gcp-auth
	57fd4de0c99ca       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   af2005f3c652a       registry-proxy-w2s6j                                         kube-system
	57568bd94e7af       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   cd9c12bb7d3f7       registry-6b586f9694-czvz6                                    kube-system
	8972f335d55fe       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   8bfe43ba216e4       csi-hostpath-resizer-0                                       kube-system
	39741cf195269       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   85e40d0c7dd24       nvidia-device-plugin-daemonset-2t6gg                         kube-system
	7a84a50fa7c2b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   dcf80e0a3865f       csi-hostpathplugin-prqx4                                     kube-system
	ef27f5b38a203       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   d1132e6497f59       local-path-provisioner-648f6765c9-bmh8h                      local-path-storage
	3c062f36827d4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              patch                                    0                   4317d0e9a5be0       ingress-nginx-admission-patch-8mgj2                          ingress-nginx
	7fbdc489ecd4a       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   08fc0c5637c5e       cloud-spanner-emulator-6f9fcf858b-jlz98                      default
	656c40399f18d       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   89f2e06945e17       kube-ingress-dns-minikube                                    kube-system
	68903857276a8       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   6ab199cf77d2c       csi-hostpath-attacher-0                                      kube-system
	a1a58b7ec669a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   e98d252727bf9       snapshot-controller-7d9fbc56b8-gk5gb                         kube-system
	f957f816c5c98       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   d5c146311f97e       ingress-nginx-admission-create-99jl2                         ingress-nginx
	b88b182078ac0       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   5d18a82873c9a       yakd-dashboard-5ff678cb9-9rb44                               yakd-dashboard
	4fbf88d999b23       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   2bcf0ff74fa46       snapshot-controller-7d9fbc56b8-fzk67                         kube-system
	678a88e760bce       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   0a46dac934b61       metrics-server-85b7d694d7-mshff                              kube-system
	4e5de8a419785       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   387c0f82ee9d8       coredns-66bc5c9577-jxfw2                                     kube-system
	c0ddb9895a9b9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   77761049a265d       storage-provisioner                                          kube-system
	5b14178d10461       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   9a849f07a6959       kindnet-thg89                                                kube-system
	6949baeb846a9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   f4c22e1ebe2a9       kube-proxy-jkzc6                                             kube-system
	a35a59e2848f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   7225d0238f3f8       kube-apiserver-addons-714840                                 kube-system
	15771f960cfb3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   6ede78832e2e8       kube-scheduler-addons-714840                                 kube-system
	17dd29eab394d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   c19a2f6c36fc7       kube-controller-manager-addons-714840                        kube-system
	5fabe274c8207       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   7e7dd7e85c3a6       etcd-addons-714840                                           kube-system
	
	
	==> coredns [4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc] <==
	[INFO] 10.244.0.11:42245 - 25443 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002073361s
	[INFO] 10.244.0.11:42245 - 63629 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000127615s
	[INFO] 10.244.0.11:42245 - 10187 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00008266s
	[INFO] 10.244.0.11:59289 - 15405 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194898s
	[INFO] 10.244.0.11:59289 - 15643 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000283383s
	[INFO] 10.244.0.11:52453 - 18234 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134721s
	[INFO] 10.244.0.11:52453 - 18021 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070531s
	[INFO] 10.244.0.11:46028 - 36384 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084424s
	[INFO] 10.244.0.11:46028 - 36195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072s
	[INFO] 10.244.0.11:37955 - 60678 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.000834205s
	[INFO] 10.244.0.11:37955 - 61123 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001343444s
	[INFO] 10.244.0.11:37145 - 47621 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119681s
	[INFO] 10.244.0.11:37145 - 47481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158541s
	[INFO] 10.244.0.21:43557 - 44662 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00022808s
	[INFO] 10.244.0.21:58360 - 24108 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118803s
	[INFO] 10.244.0.21:52709 - 11444 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000177339s
	[INFO] 10.244.0.21:55031 - 34502 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000309056s
	[INFO] 10.244.0.21:38632 - 52286 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183747s
	[INFO] 10.244.0.21:40253 - 63330 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106413s
	[INFO] 10.244.0.21:60503 - 60323 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002130667s
	[INFO] 10.244.0.21:60446 - 5681 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001965349s
	[INFO] 10.244.0.21:50287 - 55702 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001570031s
	[INFO] 10.244.0.21:38799 - 65285 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002212702s
	[INFO] 10.244.0.23:54429 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000203571s
	[INFO] 10.244.0.23:39702 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106306s
	
	
	==> describe nodes <==
	Name:               addons-714840
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-714840
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=addons-714840
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_48_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-714840
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-714840"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:48:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-714840
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:50:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:50:07 +0000   Sat, 01 Nov 2025 09:47:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:50:07 +0000   Sat, 01 Nov 2025 09:47:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:50:07 +0000   Sat, 01 Nov 2025 09:47:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:50:07 +0000   Sat, 01 Nov 2025 09:48:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-714840
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                f734a300-1b07-43a9-9d01-10886b98b0b1
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     cloud-spanner-emulator-6f9fcf858b-jlz98      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-lhntn                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gcp-auth                    gcp-auth-78565c9fb4-rfbql                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9bmq7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m20s
	  kube-system                 coredns-66bc5c9577-jxfw2                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 csi-hostpathplugin-prqx4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 etcd-addons-714840                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-thg89                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-addons-714840                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-addons-714840        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-jkzc6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-addons-714840                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 metrics-server-85b7d694d7-mshff              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m22s
	  kube-system                 nvidia-device-plugin-daemonset-2t6gg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 registry-6b586f9694-czvz6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 registry-creds-764b6fb674-bnkwh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 registry-proxy-w2s6j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 snapshot-controller-7d9fbc56b8-fzk67         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 snapshot-controller-7d9fbc56b8-gk5gb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  local-path-storage          local-path-provisioner-648f6765c9-bmh8h      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9rb44               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node addons-714840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node addons-714840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node addons-714840 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m31s                  kubelet          Node addons-714840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s                  kubelet          Node addons-714840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s                  kubelet          Node addons-714840 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m27s                  node-controller  Node addons-714840 event: Registered Node addons-714840 in Controller
	  Normal   NodeReady                103s                   kubelet          Node addons-714840 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014607] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.506888] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032735] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.832337] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.644621] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:37] hrtimer: interrupt took 44045431 ns
	[Nov 1 09:38] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Nov 1 09:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:47] overlayfs: idmapped layers are currently not supported
	[  +0.058238] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8] <==
	{"level":"warn","ts":"2025-11-01T09:47:59.957752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:47:59.979805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:47:59.990384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.012472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.029684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.039896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.057885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.077132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.095571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.113233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.131595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.156186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.177029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.194473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.216898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.340561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.358188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.417693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:00.494835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:16.097563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:16.141811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:38.497027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:38.520688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:38.554878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:38.588383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46968","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [cfb354a027b1d301bf9c0c79ff5672bb199d5061a790e46b5677aca8a8307135] <==
	2025/11/01 09:50:00 GCP Auth Webhook started!
	2025/11/01 09:50:03 Ready to marshal response ...
	2025/11/01 09:50:03 Ready to write response ...
	2025/11/01 09:50:03 Ready to marshal response ...
	2025/11/01 09:50:03 Ready to write response ...
	2025/11/01 09:50:03 Ready to marshal response ...
	2025/11/01 09:50:03 Ready to write response ...
	2025/11/01 09:50:23 Ready to marshal response ...
	2025/11/01 09:50:23 Ready to write response ...
	2025/11/01 09:50:25 Ready to marshal response ...
	2025/11/01 09:50:25 Ready to write response ...
	2025/11/01 09:50:25 Ready to marshal response ...
	2025/11/01 09:50:25 Ready to write response ...
	2025/11/01 09:50:33 Ready to marshal response ...
	2025/11/01 09:50:33 Ready to write response ...
	
	
	==> kernel <==
	 09:50:35 up  1:33,  0 user,  load average: 1.72, 2.71, 3.17
	Linux addons-714840 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e] <==
	I1101 09:48:43.641076       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:48:43.641134       1 metrics.go:72] Registering metrics
	I1101 09:48:43.641206       1 controller.go:711] "Syncing nftables rules"
	I1101 09:48:52.046533       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:48:52.046592       1 main.go:301] handling current node
	I1101 09:49:02.040360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:49:02.040431       1 main.go:301] handling current node
	I1101 09:49:12.041237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:49:12.041264       1 main.go:301] handling current node
	I1101 09:49:22.039811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:49:22.039841       1 main.go:301] handling current node
	I1101 09:49:32.040702       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:49:32.040737       1 main.go:301] handling current node
	I1101 09:49:42.039838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:49:42.039872       1 main.go:301] handling current node
	I1101 09:49:52.040784       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:49:52.040817       1 main.go:301] handling current node
	I1101 09:50:02.040390       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:50:02.040426       1 main.go:301] handling current node
	I1101 09:50:12.042395       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:50:12.043459       1 main.go:301] handling current node
	I1101 09:50:22.044615       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:50:22.044652       1 main.go:301] handling current node
	I1101 09:50:32.040509       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:50:32.040545       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79] <==
	I1101 09:48:15.539273       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1101 09:48:15.667644       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.101.76.163"}
	W1101 09:48:16.096702       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:48:16.116751       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1101 09:48:18.468908       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.105.141"}
	W1101 09:48:38.492496       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:48:38.519169       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 09:48:38.553925       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 09:48:38.586955       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:48:52.430888       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.105.141:443: connect: connection refused
	E1101 09:48:52.430940       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.105.141:443: connect: connection refused" logger="UnhandledError"
	W1101 09:48:52.431410       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.105.141:443: connect: connection refused
	E1101 09:48:52.435494       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.105.141:443: connect: connection refused" logger="UnhandledError"
	W1101 09:48:52.520162       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.105.141:443: connect: connection refused
	E1101 09:48:52.520271       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.105.141:443: connect: connection refused" logger="UnhandledError"
	W1101 09:49:07.603743       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:49:07.603818       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:49:07.604848       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.26.83:443: connect: connection refused" logger="UnhandledError"
	E1101 09:49:07.607755       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.26.83:443: connect: connection refused" logger="UnhandledError"
	E1101 09:49:07.610846       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.26.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.26.83:443: connect: connection refused" logger="UnhandledError"
	I1101 09:49:07.756987       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:50:12.804404       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54872: use of closed network connection
	
	
	==> kube-controller-manager [17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd] <==
	I1101 09:48:08.520987       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:48:08.520995       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:48:08.530444       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-714840" podCIDRs=["10.244.0.0/24"]
	I1101 09:48:08.563944       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:48:08.564061       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:48:08.564167       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:48:08.564243       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-714840"
	I1101 09:48:08.564285       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:48:08.564321       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:48:08.564171       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:48:08.566192       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:48:08.566328       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:48:08.566912       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:48:08.569247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:48:08.569279       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:48:08.569287       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1101 09:48:13.810570       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 09:48:38.476109       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:48:38.476274       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:48:38.476340       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:48:38.526040       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:48:38.538544       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:48:38.579374       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:48:38.639399       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:48:53.574641       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f] <==
	I1101 09:48:11.956541       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:48:12.068524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:48:12.191091       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:48:12.191131       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:48:12.191208       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:48:12.232336       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:48:12.232395       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:48:12.239325       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:48:12.239662       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:48:12.239680       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:48:12.248853       1 config.go:200] "Starting service config controller"
	I1101 09:48:12.248877       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:48:12.248894       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:48:12.248898       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:48:12.248915       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:48:12.248945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:48:12.249600       1 config.go:309] "Starting node config controller"
	I1101 09:48:12.249608       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:48:12.249618       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:48:12.351130       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:48:12.351174       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:48:12.351187       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce] <==
	I1101 09:48:01.633375       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:48:01.633418       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:48:01.636302       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1101 09:48:01.639789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:48:01.649406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:48:01.649657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:48:01.650675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:48:01.650808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:48:01.650920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:48:01.651046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:48:01.651690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:48:01.652795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:48:01.652953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:48:01.653502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:48:01.653559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:48:01.653657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:48:01.653657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:48:01.653791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:48:01.653847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:48:01.653893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:48:01.653920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:48:01.653998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:48:02.500646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:48:02.540755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1101 09:48:03.036608       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:50:32 addons-714840 kubelet[1270]: I1101 09:50:32.222425    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd8856fe-807d-489d-9675-9adc9560e7ff-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe" (OuterVolumeSpecName: "data") pod "dd8856fe-807d-489d-9675-9adc9560e7ff" (UID: "dd8856fe-807d-489d-9675-9adc9560e7ff"). InnerVolumeSpecName "pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 09:50:32 addons-714840 kubelet[1270]: I1101 09:50:32.228146    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8856fe-807d-489d-9675-9adc9560e7ff-kube-api-access-lt5pb" (OuterVolumeSpecName: "kube-api-access-lt5pb") pod "dd8856fe-807d-489d-9675-9adc9560e7ff" (UID: "dd8856fe-807d-489d-9675-9adc9560e7ff"). InnerVolumeSpecName "kube-api-access-lt5pb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 09:50:32 addons-714840 kubelet[1270]: I1101 09:50:32.322981    1270 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dd8856fe-807d-489d-9675-9adc9560e7ff-gcp-creds\") on node \"addons-714840\" DevicePath \"\""
	Nov 01 09:50:32 addons-714840 kubelet[1270]: I1101 09:50:32.323036    1270 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lt5pb\" (UniqueName: \"kubernetes.io/projected/dd8856fe-807d-489d-9675-9adc9560e7ff-kube-api-access-lt5pb\") on node \"addons-714840\" DevicePath \"\""
	Nov 01 09:50:32 addons-714840 kubelet[1270]: I1101 09:50:32.323049    1270 reconciler_common.go:299] "Volume detached for volume \"pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe\" (UniqueName: \"kubernetes.io/host-path/dd8856fe-807d-489d-9675-9adc9560e7ff-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe\") on node \"addons-714840\" DevicePath \"\""
	Nov 01 09:50:33 addons-714840 kubelet[1270]: I1101 09:50:33.100133    1270 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a435dd3e424ee09a79b8f8a889524934d6d90dcd355855b09f40a08d4aec1239"
	Nov 01 09:50:33 addons-714840 kubelet[1270]: E1101 09:50:33.107520    1270 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-714840\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-714840' and this object" podUID="dd8856fe-807d-489d-9675-9adc9560e7ff" pod="default/test-local-path"
	Nov 01 09:50:33 addons-714840 kubelet[1270]: I1101 09:50:33.239578    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/853fb37d-7b33-4721-8030-320d25f4c705-gcp-creds\") pod \"helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe\" (UID: \"853fb37d-7b33-4721-8030-320d25f4c705\") " pod="local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe"
	Nov 01 09:50:33 addons-714840 kubelet[1270]: I1101 09:50:33.240097    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/853fb37d-7b33-4721-8030-320d25f4c705-script\") pod \"helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe\" (UID: \"853fb37d-7b33-4721-8030-320d25f4c705\") " pod="local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe"
	Nov 01 09:50:33 addons-714840 kubelet[1270]: I1101 09:50:33.240201    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/853fb37d-7b33-4721-8030-320d25f4c705-data\") pod \"helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe\" (UID: \"853fb37d-7b33-4721-8030-320d25f4c705\") " pod="local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe"
	Nov 01 09:50:33 addons-714840 kubelet[1270]: I1101 09:50:33.240308    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8t5\" (UniqueName: \"kubernetes.io/projected/853fb37d-7b33-4721-8030-320d25f4c705-kube-api-access-xj8t5\") pod \"helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe\" (UID: \"853fb37d-7b33-4721-8030-320d25f4c705\") " pod="local-path-storage/helper-pod-delete-pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe"
	Nov 01 09:50:34 addons-714840 kubelet[1270]: E1101 09:50:34.107600    1270 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-714840\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-714840' and this object" podUID="dd8856fe-807d-489d-9675-9adc9560e7ff" pod="default/test-local-path"
	Nov 01 09:50:34 addons-714840 kubelet[1270]: I1101 09:50:34.254192    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8856fe-807d-489d-9675-9adc9560e7ff" path="/var/lib/kubelet/pods/dd8856fe-807d-489d-9675-9adc9560e7ff/volumes"
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.261178    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/853fb37d-7b33-4721-8030-320d25f4c705-data\") pod \"853fb37d-7b33-4721-8030-320d25f4c705\" (UID: \"853fb37d-7b33-4721-8030-320d25f4c705\") "
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.261252    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/853fb37d-7b33-4721-8030-320d25f4c705-script\") pod \"853fb37d-7b33-4721-8030-320d25f4c705\" (UID: \"853fb37d-7b33-4721-8030-320d25f4c705\") "
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.261286    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj8t5\" (UniqueName: \"kubernetes.io/projected/853fb37d-7b33-4721-8030-320d25f4c705-kube-api-access-xj8t5\") pod \"853fb37d-7b33-4721-8030-320d25f4c705\" (UID: \"853fb37d-7b33-4721-8030-320d25f4c705\") "
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.261321    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/853fb37d-7b33-4721-8030-320d25f4c705-gcp-creds\") pod \"853fb37d-7b33-4721-8030-320d25f4c705\" (UID: \"853fb37d-7b33-4721-8030-320d25f4c705\") "
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.261500    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853fb37d-7b33-4721-8030-320d25f4c705-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "853fb37d-7b33-4721-8030-320d25f4c705" (UID: "853fb37d-7b33-4721-8030-320d25f4c705"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.261532    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853fb37d-7b33-4721-8030-320d25f4c705-data" (OuterVolumeSpecName: "data") pod "853fb37d-7b33-4721-8030-320d25f4c705" (UID: "853fb37d-7b33-4721-8030-320d25f4c705"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.261848    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853fb37d-7b33-4721-8030-320d25f4c705-script" (OuterVolumeSpecName: "script") pod "853fb37d-7b33-4721-8030-320d25f4c705" (UID: "853fb37d-7b33-4721-8030-320d25f4c705"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.269221    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853fb37d-7b33-4721-8030-320d25f4c705-kube-api-access-xj8t5" (OuterVolumeSpecName: "kube-api-access-xj8t5") pod "853fb37d-7b33-4721-8030-320d25f4c705" (UID: "853fb37d-7b33-4721-8030-320d25f4c705"). InnerVolumeSpecName "kube-api-access-xj8t5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.362055    1270 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xj8t5\" (UniqueName: \"kubernetes.io/projected/853fb37d-7b33-4721-8030-320d25f4c705-kube-api-access-xj8t5\") on node \"addons-714840\" DevicePath \"\""
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.362095    1270 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/853fb37d-7b33-4721-8030-320d25f4c705-gcp-creds\") on node \"addons-714840\" DevicePath \"\""
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.362105    1270 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/853fb37d-7b33-4721-8030-320d25f4c705-data\") on node \"addons-714840\" DevicePath \"\""
	Nov 01 09:50:35 addons-714840 kubelet[1270]: I1101 09:50:35.362113    1270 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/853fb37d-7b33-4721-8030-320d25f4c705-script\") on node \"addons-714840\" DevicePath \"\""
	
	
	==> storage-provisioner [c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323] <==
	W1101 09:50:10.178833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:12.182041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:12.186716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:14.189851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:14.194348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:16.197644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:16.204485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:18.208136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:18.212894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:20.216171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:20.220786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:22.224046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:22.229130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:24.232441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:24.237067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:26.239971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:26.245752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:28.249354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:28.256766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:30.260847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:30.266367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:32.270150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:32.274606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:34.278723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:50:34.289565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-714840 -n addons-714840
helpers_test.go:269: (dbg) Run:  kubectl --context addons-714840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-99jl2 ingress-nginx-admission-patch-8mgj2 registry-creds-764b6fb674-bnkwh
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-714840 describe pod ingress-nginx-admission-create-99jl2 ingress-nginx-admission-patch-8mgj2 registry-creds-764b6fb674-bnkwh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-714840 describe pod ingress-nginx-admission-create-99jl2 ingress-nginx-admission-patch-8mgj2 registry-creds-764b6fb674-bnkwh: exit status 1 (82.618269ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-99jl2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8mgj2" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-bnkwh" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-714840 describe pod ingress-nginx-admission-create-99jl2 ingress-nginx-admission-patch-8mgj2 registry-creds-764b6fb674-bnkwh: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable headlamp --alsologtostderr -v=1: exit status 11 (260.548268ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:50:36.891573  302360 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:36.892678  302360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:36.892729  302360 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:36.892752  302360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:36.893110  302360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:36.893473  302360 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:36.893899  302360 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:36.893943  302360 addons.go:607] checking whether the cluster is paused
	I1101 09:50:36.894074  302360 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:36.894113  302360 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:36.894612  302360 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:36.913428  302360 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:36.913484  302360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:36.930431  302360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:37.035871  302360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:37.035955  302360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:37.067668  302360 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:37.067709  302360 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:37.067715  302360 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:37.067719  302360 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:37.067722  302360 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:37.067726  302360 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:37.067729  302360 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:37.067749  302360 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:37.067757  302360 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:37.067764  302360 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:37.067767  302360 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:37.067771  302360 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:37.067774  302360 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:37.067778  302360 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:37.067781  302360 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:37.067801  302360 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:37.067810  302360 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:37.067839  302360 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:37.067845  302360 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:37.067848  302360 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:37.067853  302360 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:37.067864  302360 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:37.067868  302360 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:37.067871  302360 cri.go:89] found id: ""
	I1101 09:50:37.067942  302360 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:37.084211  302360 out.go:203] 
	W1101 09:50:37.087181  302360 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:37.087210  302360 out.go:285] * 
	* 
	W1101 09:50:37.092167  302360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:37.095183  302360 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.69s)

x
+
TestAddons/parallel/CloudSpanner (5.39s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner


=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-jlz98" [5e764620-a00f-4115-9f76-e8697fa50300] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003736628s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (383.436497ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:50:33.752399  301853 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:33.753375  301853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:33.753397  301853 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:33.753403  301853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:33.753696  301853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:33.754042  301853 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:33.754458  301853 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:33.754479  301853 addons.go:607] checking whether the cluster is paused
	I1101 09:50:33.754637  301853 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:33.754656  301853 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:33.755250  301853 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:33.782430  301853 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:33.782494  301853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:33.812100  301853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:33.936354  301853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:33.936458  301853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:34.022148  301853 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:34.022250  301853 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:34.022284  301853 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:34.022307  301853 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:34.022342  301853 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:34.022378  301853 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:34.022396  301853 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:34.022420  301853 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:34.022438  301853 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:34.022466  301853 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:34.022492  301853 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:34.022519  301853 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:34.022549  301853 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:34.022580  301853 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:34.022665  301853 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:34.022689  301853 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:34.022729  301853 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:34.022766  301853 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:34.022792  301853 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:34.022811  301853 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:34.022850  301853 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:34.022885  301853 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:34.022910  301853 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:34.022933  301853 cri.go:89] found id: ""
	I1101 09:50:34.023038  301853 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:34.042487  301853 out.go:203] 
	W1101 09:50:34.045374  301853 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:34.045415  301853 out.go:285] * 
	* 
	W1101 09:50:34.050453  301853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:34.054499  301853 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.39s)
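Note: this failure and the other addons-disable failures in this run (Headlamp, LocalPath, NvidiaDevicePlugin, Yakd) carry the same signature in their stderr: before disabling an addon, minikube checks whether the cluster is paused, and that check runs "sudo runc list -f json" on the node. On this crio node the command exits 1 with "open /run/runc: no such file or directory", so every disable call aborts with MK_ADDON_DISABLE_PAUSED even though the crictl listing just above it succeeds. A minimal reproduction sketch, assuming the report's workspace layout (out/minikube-linux-arm64) and SSH access to the profile node; it only re-runs the two commands already visible in the stderr:

	# succeeds: crio lists the kube-system containers, so the runtime itself is healthy
	out/minikube-linux-arm64 -p addons-714840 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# exits 1 on crio: /run/runc does not exist, which is what trips MK_ADDON_DISABLE_PAUSED above
	out/minikube-linux-arm64 -p addons-714840 ssh -- sudo runc list -f json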

                                                
                                    
TestAddons/parallel/LocalPath (8.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-714840 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-714840 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/11/01 09:50:28 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-714840 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [dd8856fe-807d-489d-9675-9adc9560e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [dd8856fe-807d-489d-9675-9adc9560e7ff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [dd8856fe-807d-489d-9675-9adc9560e7ff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003206049s
addons_test.go:967: (dbg) Run:  kubectl --context addons-714840 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 ssh "cat /opt/local-path-provisioner/pvc-f4a736fb-de7a-40ec-9a00-89f46e291bfe_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-714840 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-714840 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (284.311289ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:50:33.179878  301739 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:33.181615  301739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:33.181634  301739 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:33.181641  301739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:33.182346  301739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:33.182853  301739 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:33.183615  301739 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:33.183640  301739 addons.go:607] checking whether the cluster is paused
	I1101 09:50:33.183822  301739 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:33.183840  301739 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:33.184595  301739 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:33.204352  301739 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:33.204417  301739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:33.221968  301739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:33.327754  301739 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:33.327847  301739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:33.379090  301739 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:33.379170  301739 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:33.379192  301739 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:33.379214  301739 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:33.379248  301739 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:33.379277  301739 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:33.379303  301739 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:33.379336  301739 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:33.379356  301739 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:33.379393  301739 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:33.379422  301739 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:33.379440  301739 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:33.379461  301739 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:33.379504  301739 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:33.379524  301739 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:33.379555  301739 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:33.379596  301739 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:33.379618  301739 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:33.379638  301739 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:33.379676  301739 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:33.379701  301739 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:33.379721  301739 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:33.379750  301739 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:33.379774  301739 cri.go:89] found id: ""
	I1101 09:50:33.379863  301739 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:33.396513  301739 out.go:203] 
	W1101 09:50:33.400121  301739 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:33.400213  301739 out.go:285] * 
	* 
	W1101 09:50:33.405417  301739 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:33.409291  301739 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.42s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-2t6gg" [b0ac2900-1eee-465f-a19f-5eaefd1775e9] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006109111s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (267.910051ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:50:24.782899  301322 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:24.783841  301322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:24.783851  301322 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:24.783855  301322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:24.784210  301322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:24.784519  301322 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:24.784904  301322 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:24.784958  301322 addons.go:607] checking whether the cluster is paused
	I1101 09:50:24.785078  301322 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:24.785089  301322 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:24.785660  301322 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:24.805748  301322 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:24.805814  301322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:24.825296  301322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:24.931377  301322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:24.931539  301322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:24.960642  301322 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:24.960709  301322 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:24.960728  301322 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:24.960749  301322 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:24.960768  301322 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:24.960798  301322 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:24.960823  301322 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:24.960844  301322 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:24.960871  301322 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:24.960905  301322 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:24.961007  301322 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:24.961019  301322 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:24.961023  301322 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:24.961026  301322 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:24.961030  301322 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:24.961034  301322 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:24.961038  301322 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:24.961041  301322 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:24.961045  301322 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:24.961048  301322 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:24.961055  301322 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:24.961058  301322 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:24.961061  301322 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:24.961064  301322 cri.go:89] found id: ""
	I1101 09:50:24.961136  301322 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:24.976413  301322 out.go:203] 
	W1101 09:50:24.979442  301322 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:24.979472  301322 out.go:285] * 
	* 
	W1101 09:50:24.984477  301322 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:24.987662  301322 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.28s)

                                                
                                    
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9rb44" [0c9abd53-1d7b-4b67-8348-ac9104c47346] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004224975s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-714840 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-714840 addons disable yakd --alsologtostderr -v=1: exit status 11 (265.321363ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:50:19.505578  301230 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:50:19.506453  301230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:19.506468  301230 out.go:374] Setting ErrFile to fd 2...
	I1101 09:50:19.506474  301230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:50:19.506797  301230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:50:19.507115  301230 mustload.go:66] Loading cluster: addons-714840
	I1101 09:50:19.507563  301230 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:19.507581  301230 addons.go:607] checking whether the cluster is paused
	I1101 09:50:19.507687  301230 config.go:182] Loaded profile config "addons-714840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:19.507719  301230 host.go:66] Checking if "addons-714840" exists ...
	I1101 09:50:19.508172  301230 cli_runner.go:164] Run: docker container inspect addons-714840 --format={{.State.Status}}
	I1101 09:50:19.527376  301230 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:19.527437  301230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-714840
	I1101 09:50:19.547463  301230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/addons-714840/id_rsa Username:docker}
	I1101 09:50:19.655579  301230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:19.655676  301230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:19.685236  301230 cri.go:89] found id: "10a1c7de04e0da6afb9660d7d3283c6f22abac3ca6968c64b014c58c9207aaa9"
	I1101 09:50:19.685306  301230 cri.go:89] found id: "4a127573889cd07b129acace79189c3c585b9ef5945cc241954d7b46ef59c90f"
	I1101 09:50:19.685320  301230 cri.go:89] found id: "0d38db82f09d9103dfe872432ce59c9c63768707723154d082fe5e59c82c5dca"
	I1101 09:50:19.685325  301230 cri.go:89] found id: "7dcafd9990f60b7a000fb9a1c9a587c09f69359051efd3f7cb99173b289872c5"
	I1101 09:50:19.685328  301230 cri.go:89] found id: "290dbe24a3813fe18fb563e17020411d4eed30aea615f2693f94bff4806beadd"
	I1101 09:50:19.685332  301230 cri.go:89] found id: "57fd4de0c99ca8c77425b8f7d6946c863f7ec62caf3e69c70d2c5d704800cd41"
	I1101 09:50:19.685335  301230 cri.go:89] found id: "57568bd94e7afb9ec39bd3f3f26841970fad056e3e3601d4b4ad8a89f53d3b5d"
	I1101 09:50:19.685338  301230 cri.go:89] found id: "8972f335d55fec5d9c1217d174b485b0121b29b0085dd41e7e87145715124c2b"
	I1101 09:50:19.685341  301230 cri.go:89] found id: "39741cf1952693dffd31433c36f6d21fc970c25069625237e687d6d0dfbc456c"
	I1101 09:50:19.685348  301230 cri.go:89] found id: "7a84a50fa7c2b3f9438984a2697ea3862f985c097665d460d9886ceda1800f74"
	I1101 09:50:19.685352  301230 cri.go:89] found id: "656c40399f18d4148aa409752213a3630ef6de2cb80617d19614c182a33b1fe6"
	I1101 09:50:19.685355  301230 cri.go:89] found id: "68903857276a86e44390e3d1e93b6a5cedea46429a93f973d61091d47f284c17"
	I1101 09:50:19.685359  301230 cri.go:89] found id: "a1a58b7ec669a5292c694f4cdc157b035c1d046121a513d99c51aaf8a2eb3567"
	I1101 09:50:19.685363  301230 cri.go:89] found id: "4fbf88d999b23ea12c2164c9434a1763a30fae3699d3760409d9414a7ff49628"
	I1101 09:50:19.685367  301230 cri.go:89] found id: "678a88e760bceadde9450890c14bd913a5ecd436e621d38d6bb454b6b9d98e40"
	I1101 09:50:19.685397  301230 cri.go:89] found id: "4e5de8a419785ee981fcd6d4b6944c7125de5e0bb1bc049878ef0a5063b0a0fc"
	I1101 09:50:19.685401  301230 cri.go:89] found id: "c0ddb9895a9b9f2f8fa690fac93015dd539b7793af07d2a939d4f92ccd1e6323"
	I1101 09:50:19.685406  301230 cri.go:89] found id: "5b14178d104613be41472c4f975aba504aed9df96676d3e23247be0ad3a0c99e"
	I1101 09:50:19.685421  301230 cri.go:89] found id: "6949baeb846a9b126c1f8bf2846a02d746aa021c6cffb63465c915533b49b55f"
	I1101 09:50:19.685424  301230 cri.go:89] found id: "a35a59e2848f6e4a40480ac4fb6699392f4cf979508e48376314802bab21be79"
	I1101 09:50:19.685430  301230 cri.go:89] found id: "15771f960cfb3f67b7df3c2af728f0ecf401eefb4967621a852b87a8a5fae1ce"
	I1101 09:50:19.685437  301230 cri.go:89] found id: "17dd29eab394daa288f96307bda50a25bbba4e817cb7c211523ea9a597b432cd"
	I1101 09:50:19.685440  301230 cri.go:89] found id: "5fabe274c82079cac54eefc0bfcb9623cd979f61144e48503e89a4fdf90fd0a8"
	I1101 09:50:19.685443  301230 cri.go:89] found id: ""
	I1101 09:50:19.685509  301230 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:50:19.701051  301230 out.go:203] 
	W1101 09:50:19.703975  301230 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:50:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:50:19.704007  301230 out.go:285] * 
	* 
	W1101 09:50:19.709207  301230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:50:19.712242  301230 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-714840 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-839033 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-839033 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-4prgw" [eef2e76e-2b0d-4647-9145-488ac3ab77c1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-839033 -n functional-839033
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 10:07:25.015736231 +0000 UTC m=+1225.142188091
functional_test.go:1645: (dbg) Run:  kubectl --context functional-839033 describe po hello-node-connect-7d85dfc575-4prgw -n default
functional_test.go:1645: (dbg) kubectl --context functional-839033 describe po hello-node-connect-7d85dfc575-4prgw -n default:
Name:             hello-node-connect-7d85dfc575-4prgw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-839033/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:57:24 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zmp7t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zmp7t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4prgw to functional-839033
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-839033 logs hello-node-connect-7d85dfc575-4prgw -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-839033 logs hello-node-connect-7d85dfc575-4prgw -n default: exit status 1 (109.038904ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-4prgw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-839033 logs hello-node-connect-7d85dfc575-4prgw -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-839033 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-4prgw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-839033/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:57:24 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zmp7t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zmp7t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4prgw to functional-839033
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-839033 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-839033 logs -l app=hello-node-connect: exit status 1 (85.551251ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-4prgw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-839033 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-839033 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.170.178
IPs:                      10.110.170.178
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32265/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
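The Events above show the real problem for this test: crio's short-name handling is in enforcing mode, so the unqualified image reference kicbase/echo-server cannot be resolved ("returns ambiguous list") and the pod stays in ImagePullBackOff, which is why the Endpoints list of the service is empty. A minimal workaround sketch, assuming the image is the copy published on Docker Hub (the docker.io prefix and the 1.0 tag are assumptions, not values taken from this run):

	# recreate the deployment with a fully qualified image reference so that
	# crio's short-name enforcement has nothing left to resolve
	kubectl --context functional-839033 delete deployment hello-node-connect
	kubectl --context functional-839033 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-839033 expose deployment hello-node-connect --type=NodePort --port=8080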
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-839033
helpers_test.go:243: (dbg) docker inspect functional-839033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0a9efd8ec5a9808a8cf77ef6f08cddc6eef1330b07e469c4f330c99d6541dae8",
	        "Created": "2025-11-01T09:54:29.000596055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309882,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:54:29.065864035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/0a9efd8ec5a9808a8cf77ef6f08cddc6eef1330b07e469c4f330c99d6541dae8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a9efd8ec5a9808a8cf77ef6f08cddc6eef1330b07e469c4f330c99d6541dae8/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a9efd8ec5a9808a8cf77ef6f08cddc6eef1330b07e469c4f330c99d6541dae8/hosts",
	        "LogPath": "/var/lib/docker/containers/0a9efd8ec5a9808a8cf77ef6f08cddc6eef1330b07e469c4f330c99d6541dae8/0a9efd8ec5a9808a8cf77ef6f08cddc6eef1330b07e469c4f330c99d6541dae8-json.log",
	        "Name": "/functional-839033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-839033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-839033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0a9efd8ec5a9808a8cf77ef6f08cddc6eef1330b07e469c4f330c99d6541dae8",
	                "LowerDir": "/var/lib/docker/overlay2/737a668cd3402fbc0a8ba7f1dda7643d6c018a12dcaf638893ba24a36410162e-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/737a668cd3402fbc0a8ba7f1dda7643d6c018a12dcaf638893ba24a36410162e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/737a668cd3402fbc0a8ba7f1dda7643d6c018a12dcaf638893ba24a36410162e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/737a668cd3402fbc0a8ba7f1dda7643d6c018a12dcaf638893ba24a36410162e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-839033",
	                "Source": "/var/lib/docker/volumes/functional-839033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-839033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-839033",
	                "name.minikube.sigs.k8s.io": "functional-839033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "66a73e58c58f7c9db895677f357fc2b585e989eb879575a400d4888d447ba34b",
	            "SandboxKey": "/var/run/docker/netns/66a73e58c58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-839033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:d6:c1:39:8b:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd3d0b9897dc2d2238d18dafab8ae2eca45cc368a83871a255a7a950c6b20a94",
	                    "EndpointID": "f43c1ce0bb128952e2a226ba5acaad46b491268a2836742efd04ee9508e2c30c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-839033",
	                        "0a9efd8ec5a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
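The inspect output above shows that the kicbase container publishes its service ports only on 127.0.0.1 (22/tcp -> 33148, 2376/tcp -> 33149, 5000/tcp -> 33150, 8441/tcp -> 33151, 32443/tcp -> 33152). The start log below resolves the SSH port the same way, with a Go template against .NetworkSettings.Ports. As a minimal sketch for checking that mapping by hand, assuming the docker CLI is on PATH and the profile container is named functional-839033 (the extra single-quote wrapping that minikube's cli_runner adds around the template is omitted here):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-839033
	# prints 33148, matching the 22/tcp entry in the Ports map above

The same template with "8441/tcp" would return the host port for the API server (33151 here), which is the port the kubeconfig for this profile points at.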
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-839033 -n functional-839033
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-839033 logs -n 25: (1.539375197s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-839033 ssh sudo cat /etc/ssl/certs/294288.pem                                                                                                  │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image load --daemon kicbase/echo-server:functional-839033 --alsologtostderr                                                             │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ ssh     │ functional-839033 ssh sudo cat /usr/share/ca-certificates/294288.pem                                                                                      │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ ssh     │ functional-839033 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image ls                                                                                                                                │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ ssh     │ functional-839033 ssh sudo cat /etc/ssl/certs/2942882.pem                                                                                                 │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image load --daemon kicbase/echo-server:functional-839033 --alsologtostderr                                                             │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ ssh     │ functional-839033 ssh sudo cat /usr/share/ca-certificates/2942882.pem                                                                                     │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ ssh     │ functional-839033 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image ls                                                                                                                                │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ ssh     │ functional-839033 ssh sudo cat /etc/test/nested/copy/294288/hosts                                                                                         │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image load --daemon kicbase/echo-server:functional-839033 --alsologtostderr                                                             │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image ls                                                                                                                                │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image save kicbase/echo-server:functional-839033 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image rm kicbase/echo-server:functional-839033 --alsologtostderr                                                                        │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ ssh     │ functional-839033 ssh echo hello                                                                                                                          │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image ls                                                                                                                                │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ ssh     │ functional-839033 ssh cat /etc/hostname                                                                                                                   │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ image   │ functional-839033 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ tunnel  │ functional-839033 tunnel --alsologtostderr                                                                                                                │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ tunnel  │ functional-839033 tunnel --alsologtostderr                                                                                                                │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ image   │ functional-839033 image save --daemon kicbase/echo-server:functional-839033 --alsologtostderr                                                             │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ tunnel  │ functional-839033 tunnel --alsologtostderr                                                                                                                │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ functional-839033 addons list                                                                                                                             │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ addons  │ functional-839033 addons list -o json                                                                                                                     │ functional-839033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:56:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:56:21.871263  314033 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:56:21.871432  314033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:56:21.871437  314033 out.go:374] Setting ErrFile to fd 2...
	I1101 09:56:21.871441  314033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:56:21.871698  314033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:56:21.872049  314033 out.go:368] Setting JSON to false
	I1101 09:56:21.872958  314033 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5934,"bootTime":1761985048,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 09:56:21.873018  314033 start.go:143] virtualization:  
	I1101 09:56:21.876505  314033 out.go:179] * [functional-839033] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:56:21.879476  314033 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:56:21.879618  314033 notify.go:221] Checking for updates...
	I1101 09:56:21.885091  314033 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:56:21.887990  314033 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 09:56:21.890877  314033 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 09:56:21.893830  314033 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:56:21.896684  314033 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:56:21.900106  314033 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:56:21.900191  314033 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:56:21.948125  314033 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:56:21.948259  314033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:56:22.007799  314033 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-01 09:56:21.994173579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:56:22.007896  314033 docker.go:319] overlay module found
	I1101 09:56:22.011089  314033 out.go:179] * Using the docker driver based on existing profile
	I1101 09:56:22.013983  314033 start.go:309] selected driver: docker
	I1101 09:56:22.013994  314033 start.go:930] validating driver "docker" against &{Name:functional-839033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-839033 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:56:22.014101  314033 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:56:22.014200  314033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:56:22.078094  314033 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-01 09:56:22.069182328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:56:22.078532  314033 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:56:22.078558  314033 cni.go:84] Creating CNI manager for ""
	I1101 09:56:22.078610  314033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:56:22.078654  314033 start.go:353] cluster config:
	{Name:functional-839033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-839033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:56:22.081935  314033 out.go:179] * Starting "functional-839033" primary control-plane node in "functional-839033" cluster
	I1101 09:56:22.084879  314033 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:56:22.088094  314033 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:56:22.090962  314033 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:56:22.091009  314033 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:56:22.091018  314033 cache.go:59] Caching tarball of preloaded images
	I1101 09:56:22.091076  314033 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:56:22.091104  314033 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:56:22.091112  314033 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:56:22.091231  314033 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/config.json ...
	I1101 09:56:22.111826  314033 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:56:22.111838  314033 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:56:22.111856  314033 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:56:22.111877  314033 start.go:360] acquireMachinesLock for functional-839033: {Name:mk6bc913f61b48ebc36f4dd1070f07e0aa249f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:56:22.111940  314033 start.go:364] duration metric: took 44.177µs to acquireMachinesLock for "functional-839033"
	I1101 09:56:22.111959  314033 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:56:22.111963  314033 fix.go:54] fixHost starting: 
	I1101 09:56:22.112222  314033 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
	I1101 09:56:22.129645  314033 fix.go:112] recreateIfNeeded on functional-839033: state=Running err=<nil>
	W1101 09:56:22.129675  314033 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:56:22.132968  314033 out.go:252] * Updating the running docker "functional-839033" container ...
	I1101 09:56:22.132991  314033 machine.go:94] provisionDockerMachine start ...
	I1101 09:56:22.133081  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:22.151114  314033 main.go:143] libmachine: Using SSH client type: native
	I1101 09:56:22.151434  314033 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1101 09:56:22.151441  314033 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:56:22.300541  314033 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-839033
	
	I1101 09:56:22.300556  314033 ubuntu.go:182] provisioning hostname "functional-839033"
	I1101 09:56:22.300619  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:22.319089  314033 main.go:143] libmachine: Using SSH client type: native
	I1101 09:56:22.319398  314033 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1101 09:56:22.319407  314033 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-839033 && echo "functional-839033" | sudo tee /etc/hostname
	I1101 09:56:22.477972  314033 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-839033
	
	I1101 09:56:22.478044  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:22.497105  314033 main.go:143] libmachine: Using SSH client type: native
	I1101 09:56:22.497401  314033 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1101 09:56:22.497420  314033 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-839033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-839033/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-839033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:56:22.649986  314033 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:56:22.650002  314033 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 09:56:22.650024  314033 ubuntu.go:190] setting up certificates
	I1101 09:56:22.650042  314033 provision.go:84] configureAuth start
	I1101 09:56:22.650102  314033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-839033
	I1101 09:56:22.668525  314033 provision.go:143] copyHostCerts
	I1101 09:56:22.668595  314033 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 09:56:22.668609  314033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 09:56:22.668682  314033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 09:56:22.668779  314033 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 09:56:22.668784  314033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 09:56:22.668807  314033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 09:56:22.668852  314033 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 09:56:22.668856  314033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 09:56:22.668875  314033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 09:56:22.668917  314033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.functional-839033 san=[127.0.0.1 192.168.49.2 functional-839033 localhost minikube]
	I1101 09:56:23.211735  314033 provision.go:177] copyRemoteCerts
	I1101 09:56:23.211790  314033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:56:23.211836  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:23.229393  314033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
	I1101 09:56:23.340771  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:56:23.358403  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:56:23.377052  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:56:23.395428  314033 provision.go:87] duration metric: took 745.356842ms to configureAuth
	I1101 09:56:23.395445  314033 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:56:23.395685  314033 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:56:23.395777  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:23.413068  314033 main.go:143] libmachine: Using SSH client type: native
	I1101 09:56:23.413386  314033 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1101 09:56:23.413398  314033 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:56:28.780242  314033 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:56:28.780254  314033 machine.go:97] duration metric: took 6.647256476s to provisionDockerMachine
	I1101 09:56:28.780263  314033 start.go:293] postStartSetup for "functional-839033" (driver="docker")
	I1101 09:56:28.780273  314033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:56:28.780341  314033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:56:28.780381  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:28.798924  314033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
	I1101 09:56:28.900374  314033 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:56:28.903601  314033 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:56:28.903620  314033 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:56:28.903630  314033 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 09:56:28.903686  314033 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 09:56:28.903768  314033 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 09:56:28.903842  314033 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/test/nested/copy/294288/hosts -> hosts in /etc/test/nested/copy/294288
	I1101 09:56:28.903886  314033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/294288
	I1101 09:56:28.911120  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 09:56:28.927782  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/test/nested/copy/294288/hosts --> /etc/test/nested/copy/294288/hosts (40 bytes)
	I1101 09:56:28.944890  314033 start.go:296] duration metric: took 164.61232ms for postStartSetup
	I1101 09:56:28.945034  314033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:56:28.945088  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:28.962762  314033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
	I1101 09:56:29.062362  314033 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:56:29.067698  314033 fix.go:56] duration metric: took 6.955727987s for fixHost
	I1101 09:56:29.067713  314033 start.go:83] releasing machines lock for "functional-839033", held for 6.955765385s
	I1101 09:56:29.067781  314033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-839033
	I1101 09:56:29.083972  314033 ssh_runner.go:195] Run: cat /version.json
	I1101 09:56:29.084022  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:29.084308  314033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:56:29.084355  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:29.104081  314033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
	I1101 09:56:29.113942  314033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
	I1101 09:56:29.216762  314033 ssh_runner.go:195] Run: systemctl --version
	I1101 09:56:29.318763  314033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:56:29.357128  314033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:56:29.361944  314033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:56:29.362006  314033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:56:29.369716  314033 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:56:29.369730  314033 start.go:496] detecting cgroup driver to use...
	I1101 09:56:29.369760  314033 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:56:29.369803  314033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:56:29.385318  314033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:56:29.398184  314033 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:56:29.398244  314033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:56:29.413922  314033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:56:29.426898  314033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:56:29.565758  314033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:56:29.704967  314033 docker.go:234] disabling docker service ...
	I1101 09:56:29.705031  314033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:56:29.720613  314033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:56:29.733656  314033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:56:29.865619  314033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:56:30.025372  314033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:56:30.078555  314033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:56:30.098030  314033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:56:30.098092  314033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:56:30.108488  314033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:56:30.108573  314033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:56:30.119434  314033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:56:30.131847  314033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:56:30.142839  314033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:56:30.151939  314033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:56:30.161957  314033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:56:30.171226  314033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:56:30.180587  314033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:56:30.188438  314033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:56:30.196189  314033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:56:30.326213  314033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:56:37.094122  314033 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.767875231s)
	I1101 09:56:37.094139  314033 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:56:37.094190  314033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:56:37.101481  314033 start.go:564] Will wait 60s for crictl version
	I1101 09:56:37.101549  314033 ssh_runner.go:195] Run: which crictl
	I1101 09:56:37.105034  314033 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:56:37.133733  314033 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:56:37.133814  314033 ssh_runner.go:195] Run: crio --version
	I1101 09:56:37.162035  314033 ssh_runner.go:195] Run: crio --version
	I1101 09:56:37.196342  314033 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:56:37.199432  314033 cli_runner.go:164] Run: docker network inspect functional-839033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:56:37.215269  314033 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:56:37.222438  314033 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1101 09:56:37.225366  314033 kubeadm.go:884] updating cluster {Name:functional-839033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-839033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:56:37.225482  314033 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:56:37.225553  314033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:56:37.258312  314033 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:56:37.258323  314033 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:56:37.258378  314033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:56:37.283937  314033 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:56:37.283948  314033 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:56:37.283954  314033 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1101 09:56:37.284069  314033 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-839033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-839033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:56:37.284159  314033 ssh_runner.go:195] Run: crio config
	I1101 09:56:37.346675  314033 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1101 09:56:37.346730  314033 cni.go:84] Creating CNI manager for ""
	I1101 09:56:37.346737  314033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:56:37.346750  314033 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:56:37.346772  314033 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-839033 NodeName:functional-839033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:56:37.346899  314033 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-839033"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:56:37.346977  314033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:56:37.354971  314033 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:56:37.355042  314033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:56:37.362683  314033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:56:37.375494  314033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:56:37.388214  314033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1101 09:56:37.401068  314033 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:56:37.404913  314033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:56:37.532958  314033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:56:37.547829  314033 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033 for IP: 192.168.49.2
	I1101 09:56:37.547839  314033 certs.go:195] generating shared ca certs ...
	I1101 09:56:37.547854  314033 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:56:37.547988  314033 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 09:56:37.548023  314033 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 09:56:37.548029  314033 certs.go:257] generating profile certs ...
	I1101 09:56:37.548113  314033 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.key
	I1101 09:56:37.548163  314033 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/apiserver.key.39f0bf6a
	I1101 09:56:37.548202  314033 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/proxy-client.key
	I1101 09:56:37.548315  314033 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 09:56:37.548348  314033 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 09:56:37.548355  314033 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:56:37.548378  314033 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:56:37.548397  314033 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:56:37.548422  314033 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 09:56:37.548462  314033 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 09:56:37.549154  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:56:37.567560  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:56:37.585081  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:56:37.602110  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:56:37.619181  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:56:37.635990  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:56:37.653097  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:56:37.670301  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:56:37.687768  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 09:56:37.705251  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 09:56:37.722226  314033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:56:37.739958  314033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:56:37.753020  314033 ssh_runner.go:195] Run: openssl version
	I1101 09:56:37.759494  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 09:56:37.768172  314033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 09:56:37.771954  314033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 09:56:37.772010  314033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 09:56:37.818360  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:56:37.826586  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:56:37.835381  314033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:56:37.839288  314033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:56:37.839345  314033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:56:37.880656  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:56:37.888790  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 09:56:37.897094  314033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 09:56:37.900687  314033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 09:56:37.900744  314033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 09:56:37.941708  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 09:56:37.949677  314033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:56:37.953302  314033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:56:37.994542  314033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:56:38.038708  314033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:56:38.086037  314033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:56:38.127377  314033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:56:38.168225  314033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:56:38.209162  314033 kubeadm.go:401] StartCluster: {Name:functional-839033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-839033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:56:38.209238  314033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:56:38.209301  314033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:56:38.238616  314033 cri.go:89] found id: "186dffdc84525c39ce066399bf6f59c88cc80a5566fc5eb01ee87408fe93780f"
	I1101 09:56:38.238627  314033 cri.go:89] found id: "5b137764de73916eb6c78bc20a157c135eb998beefd7cc8c6d9bca21a9a1de2f"
	I1101 09:56:38.238630  314033 cri.go:89] found id: "821d378330b2e5f88f8eb2a1a9e8823b6ff70cb3283a23966454b05f6143741e"
	I1101 09:56:38.238633  314033 cri.go:89] found id: "e42de1c54efa88811aa7462431f1d784f8d2f5cfc535305de444c58a572b4225"
	I1101 09:56:38.238636  314033 cri.go:89] found id: "4384f1c383464764bc763d31c520be563d2c9486a163e2dfbbfefc9dc9d676c3"
	I1101 09:56:38.238639  314033 cri.go:89] found id: "8983a5a37fc6dbbc399f4dd6a26d3f4139deb5284d4e6297f9c98cc0b042307a"
	I1101 09:56:38.238642  314033 cri.go:89] found id: "aaa4f4b1b02a84ef1fb671ccf5c7867210146238517165d4f4c6826c8466897c"
	I1101 09:56:38.238644  314033 cri.go:89] found id: "1f3b78decd05b7af9e8fcadb0df3bf154e56e65a4ef6993bd03be4de9b420555"
	I1101 09:56:38.238647  314033 cri.go:89] found id: "157a77cd595a20c64f11efba5d92dbcb0b8e7e7592516f2def48bdb8ee42aaee"
	I1101 09:56:38.238658  314033 cri.go:89] found id: "baf16ead6b72a307cfca861e8bc21d6349e1f1ba7bfc40a2bad265ea062cfad6"
	I1101 09:56:38.238661  314033 cri.go:89] found id: "3049d830c2267d4941f100f1f47275c98b48063447a3389fd110d7da9cccb511"
	I1101 09:56:38.238673  314033 cri.go:89] found id: "c1294cb350a71922e93f0a6b9f76bbcc2830efcfa508b060de7bd5901b16cdcf"
	I1101 09:56:38.238675  314033 cri.go:89] found id: "6085f4e71b5abd0c213463692ef8b3ac4154264dfbb672da36f92b4ccfe7edcc"
	I1101 09:56:38.238678  314033 cri.go:89] found id: "003188692aa0013ebc737208d5b8b7f1eaab2e9d74bd9dee020016592157d7a3"
	I1101 09:56:38.238689  314033 cri.go:89] found id: "45a50fbe48abf0c4bf74805fe5c93d59a66638f7b42846b853fa1cba7e1c090c"
	I1101 09:56:38.238694  314033 cri.go:89] found id: "cf3464658e3bd6038b81b1cc9e5810b9921d45ad477e96339077b235feaf72a5"
	I1101 09:56:38.238700  314033 cri.go:89] found id: ""
	I1101 09:56:38.238763  314033 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:56:38.249602  314033 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:56:38Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:56:38.249677  314033 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:56:38.257348  314033 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:56:38.257358  314033 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:56:38.257409  314033 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:56:38.264750  314033 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:56:38.265327  314033 kubeconfig.go:125] found "functional-839033" server: "https://192.168.49.2:8441"
	I1101 09:56:38.266683  314033 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:56:38.276369  314033 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-01 09:54:38.836787140 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-01 09:56:37.393661945 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1101 09:56:38.276379  314033 kubeadm.go:1161] stopping kube-system containers ...
	I1101 09:56:38.276388  314033 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 09:56:38.276445  314033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:56:38.313113  314033 cri.go:89] found id: "186dffdc84525c39ce066399bf6f59c88cc80a5566fc5eb01ee87408fe93780f"
	I1101 09:56:38.313124  314033 cri.go:89] found id: "5b137764de73916eb6c78bc20a157c135eb998beefd7cc8c6d9bca21a9a1de2f"
	I1101 09:56:38.313130  314033 cri.go:89] found id: "821d378330b2e5f88f8eb2a1a9e8823b6ff70cb3283a23966454b05f6143741e"
	I1101 09:56:38.313133  314033 cri.go:89] found id: "e42de1c54efa88811aa7462431f1d784f8d2f5cfc535305de444c58a572b4225"
	I1101 09:56:38.313135  314033 cri.go:89] found id: "4384f1c383464764bc763d31c520be563d2c9486a163e2dfbbfefc9dc9d676c3"
	I1101 09:56:38.313138  314033 cri.go:89] found id: "8983a5a37fc6dbbc399f4dd6a26d3f4139deb5284d4e6297f9c98cc0b042307a"
	I1101 09:56:38.313140  314033 cri.go:89] found id: "aaa4f4b1b02a84ef1fb671ccf5c7867210146238517165d4f4c6826c8466897c"
	I1101 09:56:38.313142  314033 cri.go:89] found id: "1f3b78decd05b7af9e8fcadb0df3bf154e56e65a4ef6993bd03be4de9b420555"
	I1101 09:56:38.313145  314033 cri.go:89] found id: "157a77cd595a20c64f11efba5d92dbcb0b8e7e7592516f2def48bdb8ee42aaee"
	I1101 09:56:38.313152  314033 cri.go:89] found id: "baf16ead6b72a307cfca861e8bc21d6349e1f1ba7bfc40a2bad265ea062cfad6"
	I1101 09:56:38.313164  314033 cri.go:89] found id: "3049d830c2267d4941f100f1f47275c98b48063447a3389fd110d7da9cccb511"
	I1101 09:56:38.313166  314033 cri.go:89] found id: "c1294cb350a71922e93f0a6b9f76bbcc2830efcfa508b060de7bd5901b16cdcf"
	I1101 09:56:38.313168  314033 cri.go:89] found id: "6085f4e71b5abd0c213463692ef8b3ac4154264dfbb672da36f92b4ccfe7edcc"
	I1101 09:56:38.313170  314033 cri.go:89] found id: "003188692aa0013ebc737208d5b8b7f1eaab2e9d74bd9dee020016592157d7a3"
	I1101 09:56:38.313173  314033 cri.go:89] found id: "45a50fbe48abf0c4bf74805fe5c93d59a66638f7b42846b853fa1cba7e1c090c"
	I1101 09:56:38.313177  314033 cri.go:89] found id: "cf3464658e3bd6038b81b1cc9e5810b9921d45ad477e96339077b235feaf72a5"
	I1101 09:56:38.313179  314033 cri.go:89] found id: ""
	I1101 09:56:38.313183  314033 cri.go:252] Stopping containers: [186dffdc84525c39ce066399bf6f59c88cc80a5566fc5eb01ee87408fe93780f 5b137764de73916eb6c78bc20a157c135eb998beefd7cc8c6d9bca21a9a1de2f 821d378330b2e5f88f8eb2a1a9e8823b6ff70cb3283a23966454b05f6143741e e42de1c54efa88811aa7462431f1d784f8d2f5cfc535305de444c58a572b4225 4384f1c383464764bc763d31c520be563d2c9486a163e2dfbbfefc9dc9d676c3 8983a5a37fc6dbbc399f4dd6a26d3f4139deb5284d4e6297f9c98cc0b042307a aaa4f4b1b02a84ef1fb671ccf5c7867210146238517165d4f4c6826c8466897c 1f3b78decd05b7af9e8fcadb0df3bf154e56e65a4ef6993bd03be4de9b420555 157a77cd595a20c64f11efba5d92dbcb0b8e7e7592516f2def48bdb8ee42aaee baf16ead6b72a307cfca861e8bc21d6349e1f1ba7bfc40a2bad265ea062cfad6 3049d830c2267d4941f100f1f47275c98b48063447a3389fd110d7da9cccb511 c1294cb350a71922e93f0a6b9f76bbcc2830efcfa508b060de7bd5901b16cdcf 6085f4e71b5abd0c213463692ef8b3ac4154264dfbb672da36f92b4ccfe7edcc 003188692aa0013ebc737208d5b8b7f1eaab2e9d74bd9dee020016592157d7a3 45a50fbe48abf0c4bf74805fe5c93d59a66638f7b42846b853fa1cba7e1c090c cf3464658e3bd6038b81b1cc9e5810b9921d45ad477e96339077b235feaf72a5]
	I1101 09:56:38.313242  314033 ssh_runner.go:195] Run: which crictl
	I1101 09:56:38.321929  314033 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 186dffdc84525c39ce066399bf6f59c88cc80a5566fc5eb01ee87408fe93780f 5b137764de73916eb6c78bc20a157c135eb998beefd7cc8c6d9bca21a9a1de2f 821d378330b2e5f88f8eb2a1a9e8823b6ff70cb3283a23966454b05f6143741e e42de1c54efa88811aa7462431f1d784f8d2f5cfc535305de444c58a572b4225 4384f1c383464764bc763d31c520be563d2c9486a163e2dfbbfefc9dc9d676c3 8983a5a37fc6dbbc399f4dd6a26d3f4139deb5284d4e6297f9c98cc0b042307a aaa4f4b1b02a84ef1fb671ccf5c7867210146238517165d4f4c6826c8466897c 1f3b78decd05b7af9e8fcadb0df3bf154e56e65a4ef6993bd03be4de9b420555 157a77cd595a20c64f11efba5d92dbcb0b8e7e7592516f2def48bdb8ee42aaee baf16ead6b72a307cfca861e8bc21d6349e1f1ba7bfc40a2bad265ea062cfad6 3049d830c2267d4941f100f1f47275c98b48063447a3389fd110d7da9cccb511 c1294cb350a71922e93f0a6b9f76bbcc2830efcfa508b060de7bd5901b16cdcf 6085f4e71b5abd0c213463692ef8b3ac4154264dfbb672da36f92b4ccfe7edcc 003188692aa0013ebc737208d5b8b7f1eaab2e9d74bd9dee020016592157d7a3 45a50fbe48abf0c4bf74805fe5c93d59a66638f7b42846b853fa1cba7e1c090c cf3464658e3bd6038b81b1cc9e5810b9921d45ad477e96339077b235feaf72a5
	I1101 09:56:38.608795  314033 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 09:56:38.767719  314033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:56:38.778194  314033 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov  1 09:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Nov  1 09:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov  1 09:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov  1 09:54 /etc/kubernetes/scheduler.conf
	
	I1101 09:56:38.778250  314033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1101 09:56:38.791094  314033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1101 09:56:38.802965  314033 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:56:38.803032  314033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:56:38.814168  314033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1101 09:56:38.826173  314033 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:56:38.826228  314033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:56:38.837399  314033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1101 09:56:38.853417  314033 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:56:38.853473  314033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:56:38.862068  314033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:56:38.874306  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:56:38.960552  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:56:40.673148  314033 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.712573165s)
	I1101 09:56:40.673220  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:56:41.070233  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:56:41.223250  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:56:41.369433  314033 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:56:41.369499  314033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:56:41.385118  314033 api_server.go:72] duration metric: took 15.695828ms to wait for apiserver process to appear ...
	I1101 09:56:41.385132  314033 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:56:41.385150  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:42.586653  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:56:42.586675  314033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:56:42.586686  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:42.678996  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:56:42.679017  314033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:56:42.886217  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:42.896346  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:56:42.896365  314033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:56:43.386001  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:43.407711  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:56:43.407732  314033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:56:43.885250  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:46.591044  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:56:46.591060  314033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:56:46.591072  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:46.636588  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:56:46.636602  314033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:56:46.886229  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:46.894651  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:56:46.894668  314033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:56:47.386015  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:47.396423  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:56:47.396446  314033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:56:47.886142  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:47.894374  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1101 09:56:47.908602  314033 api_server.go:141] control plane version: v1.34.1
	I1101 09:56:47.908618  314033 api_server.go:131] duration metric: took 6.523480777s to wait for apiserver health ...
	I1101 09:56:47.908626  314033 cni.go:84] Creating CNI manager for ""
	I1101 09:56:47.908631  314033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:56:47.912277  314033 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:56:47.915354  314033 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:56:47.919613  314033 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:56:47.919624  314033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:56:47.932611  314033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:56:48.360634  314033 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:56:48.364352  314033 system_pods.go:59] 8 kube-system pods found
	I1101 09:56:48.364376  314033 system_pods.go:61] "coredns-66bc5c9577-r9l4n" [e940aacf-dc09-4c31-b24f-6d361deb0321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:48.364382  314033 system_pods.go:61] "etcd-functional-839033" [593545b7-55ef-43ab-8ab5-8c6dac587f94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:56:48.364388  314033 system_pods.go:61] "kindnet-9849s" [bed7295e-331a-4f5e-b5cb-6e313c22a98f] Running
	I1101 09:56:48.364392  314033 system_pods.go:61] "kube-apiserver-functional-839033" [b2eac7bb-0760-4a32-817e-db3aac79f7a5] Pending
	I1101 09:56:48.364396  314033 system_pods.go:61] "kube-controller-manager-functional-839033" [be6535a3-920c-4811-8a9e-b0f0e96044a3] Running
	I1101 09:56:48.364398  314033 system_pods.go:61] "kube-proxy-xwq72" [e6ac311f-af53-425b-bacf-52ee8aabc505] Running
	I1101 09:56:48.364404  314033 system_pods.go:61] "kube-scheduler-functional-839033" [64f82c79-0601-430b-b83b-bc6e5d5b8b98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:56:48.364409  314033 system_pods.go:61] "storage-provisioner" [862df5e5-49f4-4e53-af42-ab10150d24bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:48.364415  314033 system_pods.go:74] duration metric: took 3.769972ms to wait for pod list to return data ...
	I1101 09:56:48.364421  314033 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:56:48.367427  314033 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:56:48.367447  314033 node_conditions.go:123] node cpu capacity is 2
	I1101 09:56:48.367457  314033 node_conditions.go:105] duration metric: took 3.031391ms to run NodePressure ...
	I1101 09:56:48.367519  314033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:56:48.619883  314033 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 09:56:48.626022  314033 kubeadm.go:744] kubelet initialised
	I1101 09:56:48.626033  314033 kubeadm.go:745] duration metric: took 6.136669ms waiting for restarted kubelet to initialise ...
	I1101 09:56:48.626047  314033 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:56:48.636115  314033 ops.go:34] apiserver oom_adj: -16
	I1101 09:56:48.636127  314033 kubeadm.go:602] duration metric: took 10.378763295s to restartPrimaryControlPlane
	I1101 09:56:48.636135  314033 kubeadm.go:403] duration metric: took 10.426981529s to StartCluster
	I1101 09:56:48.636151  314033 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:56:48.636231  314033 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 09:56:48.636981  314033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:56:48.637476  314033 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:56:48.637253  314033 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:56:48.637587  314033 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:56:48.637765  314033 addons.go:70] Setting storage-provisioner=true in profile "functional-839033"
	I1101 09:56:48.637783  314033 addons.go:239] Setting addon storage-provisioner=true in "functional-839033"
	W1101 09:56:48.637788  314033 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:56:48.637807  314033 host.go:66] Checking if "functional-839033" exists ...
	I1101 09:56:48.637919  314033 addons.go:70] Setting default-storageclass=true in profile "functional-839033"
	I1101 09:56:48.637933  314033 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-839033"
	I1101 09:56:48.638267  314033 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
	I1101 09:56:48.638597  314033 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
	I1101 09:56:48.642957  314033 out.go:179] * Verifying Kubernetes components...
	I1101 09:56:48.645917  314033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:56:48.688912  314033 addons.go:239] Setting addon default-storageclass=true in "functional-839033"
	W1101 09:56:48.688975  314033 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:56:48.689012  314033 host.go:66] Checking if "functional-839033" exists ...
	I1101 09:56:48.689589  314033 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
	I1101 09:56:48.696010  314033 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:56:48.698975  314033 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:56:48.698987  314033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:56:48.699069  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:48.734114  314033 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:56:48.734126  314033 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:56:48.734194  314033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:56:48.754582  314033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
	I1101 09:56:48.773068  314033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
	I1101 09:56:48.967499  314033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:56:48.971010  314033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:56:48.975411  314033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:56:48.994683  314033 node_ready.go:35] waiting up to 6m0s for node "functional-839033" to be "Ready" ...
	I1101 09:56:48.999413  314033 node_ready.go:49] node "functional-839033" is "Ready"
	I1101 09:56:48.999444  314033 node_ready.go:38] duration metric: took 4.727822ms for node "functional-839033" to be "Ready" ...
	I1101 09:56:48.999459  314033 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:56:48.999539  314033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:56:49.754565  314033 api_server.go:72] duration metric: took 1.116956296s to wait for apiserver process to appear ...
	I1101 09:56:49.754576  314033 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:56:49.754593  314033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:56:49.758076  314033 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 09:56:49.760975  314033 addons.go:515] duration metric: took 1.123374091s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 09:56:49.763995  314033 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1101 09:56:49.764995  314033 api_server.go:141] control plane version: v1.34.1
	I1101 09:56:49.765008  314033 api_server.go:131] duration metric: took 10.426965ms to wait for apiserver health ...
	I1101 09:56:49.765015  314033 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:56:49.768019  314033 system_pods.go:59] 8 kube-system pods found
	I1101 09:56:49.768039  314033 system_pods.go:61] "coredns-66bc5c9577-r9l4n" [e940aacf-dc09-4c31-b24f-6d361deb0321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:49.768045  314033 system_pods.go:61] "etcd-functional-839033" [593545b7-55ef-43ab-8ab5-8c6dac587f94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:56:49.768052  314033 system_pods.go:61] "kindnet-9849s" [bed7295e-331a-4f5e-b5cb-6e313c22a98f] Running
	I1101 09:56:49.768056  314033 system_pods.go:61] "kube-apiserver-functional-839033" [b2eac7bb-0760-4a32-817e-db3aac79f7a5] Pending
	I1101 09:56:49.768060  314033 system_pods.go:61] "kube-controller-manager-functional-839033" [be6535a3-920c-4811-8a9e-b0f0e96044a3] Running
	I1101 09:56:49.768064  314033 system_pods.go:61] "kube-proxy-xwq72" [e6ac311f-af53-425b-bacf-52ee8aabc505] Running
	I1101 09:56:49.768070  314033 system_pods.go:61] "kube-scheduler-functional-839033" [64f82c79-0601-430b-b83b-bc6e5d5b8b98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:56:49.768075  314033 system_pods.go:61] "storage-provisioner" [862df5e5-49f4-4e53-af42-ab10150d24bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:49.768081  314033 system_pods.go:74] duration metric: took 3.060019ms to wait for pod list to return data ...
	I1101 09:56:49.768090  314033 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:56:49.770490  314033 default_sa.go:45] found service account: "default"
	I1101 09:56:49.770503  314033 default_sa.go:55] duration metric: took 2.40915ms for default service account to be created ...
	I1101 09:56:49.770511  314033 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:56:49.773581  314033 system_pods.go:86] 8 kube-system pods found
	I1101 09:56:49.773599  314033 system_pods.go:89] "coredns-66bc5c9577-r9l4n" [e940aacf-dc09-4c31-b24f-6d361deb0321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:49.773606  314033 system_pods.go:89] "etcd-functional-839033" [593545b7-55ef-43ab-8ab5-8c6dac587f94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:56:49.773610  314033 system_pods.go:89] "kindnet-9849s" [bed7295e-331a-4f5e-b5cb-6e313c22a98f] Running
	I1101 09:56:49.773614  314033 system_pods.go:89] "kube-apiserver-functional-839033" [b2eac7bb-0760-4a32-817e-db3aac79f7a5] Pending
	I1101 09:56:49.773617  314033 system_pods.go:89] "kube-controller-manager-functional-839033" [be6535a3-920c-4811-8a9e-b0f0e96044a3] Running
	I1101 09:56:49.773620  314033 system_pods.go:89] "kube-proxy-xwq72" [e6ac311f-af53-425b-bacf-52ee8aabc505] Running
	I1101 09:56:49.773625  314033 system_pods.go:89] "kube-scheduler-functional-839033" [64f82c79-0601-430b-b83b-bc6e5d5b8b98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:56:49.773629  314033 system_pods.go:89] "storage-provisioner" [862df5e5-49f4-4e53-af42-ab10150d24bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:49.773653  314033 retry.go:31] will retry after 234.215012ms: missing components: kube-apiserver
	I1101 09:56:50.015140  314033 system_pods.go:86] 8 kube-system pods found
	I1101 09:56:50.015162  314033 system_pods.go:89] "coredns-66bc5c9577-r9l4n" [e940aacf-dc09-4c31-b24f-6d361deb0321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:50.015170  314033 system_pods.go:89] "etcd-functional-839033" [593545b7-55ef-43ab-8ab5-8c6dac587f94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:56:50.015174  314033 system_pods.go:89] "kindnet-9849s" [bed7295e-331a-4f5e-b5cb-6e313c22a98f] Running
	I1101 09:56:50.015180  314033 system_pods.go:89] "kube-apiserver-functional-839033" [b2eac7bb-0760-4a32-817e-db3aac79f7a5] Pending
	I1101 09:56:50.015186  314033 system_pods.go:89] "kube-controller-manager-functional-839033" [be6535a3-920c-4811-8a9e-b0f0e96044a3] Running
	I1101 09:56:50.015189  314033 system_pods.go:89] "kube-proxy-xwq72" [e6ac311f-af53-425b-bacf-52ee8aabc505] Running
	I1101 09:56:50.015195  314033 system_pods.go:89] "kube-scheduler-functional-839033" [64f82c79-0601-430b-b83b-bc6e5d5b8b98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:56:50.015201  314033 system_pods.go:89] "storage-provisioner" [862df5e5-49f4-4e53-af42-ab10150d24bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:50.015215  314033 retry.go:31] will retry after 381.95306ms: missing components: kube-apiserver
	I1101 09:56:50.401230  314033 system_pods.go:86] 8 kube-system pods found
	I1101 09:56:50.401248  314033 system_pods.go:89] "coredns-66bc5c9577-r9l4n" [e940aacf-dc09-4c31-b24f-6d361deb0321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:50.401255  314033 system_pods.go:89] "etcd-functional-839033" [593545b7-55ef-43ab-8ab5-8c6dac587f94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:56:50.401259  314033 system_pods.go:89] "kindnet-9849s" [bed7295e-331a-4f5e-b5cb-6e313c22a98f] Running
	I1101 09:56:50.401263  314033 system_pods.go:89] "kube-apiserver-functional-839033" [b2eac7bb-0760-4a32-817e-db3aac79f7a5] Pending
	I1101 09:56:50.401266  314033 system_pods.go:89] "kube-controller-manager-functional-839033" [be6535a3-920c-4811-8a9e-b0f0e96044a3] Running
	I1101 09:56:50.401268  314033 system_pods.go:89] "kube-proxy-xwq72" [e6ac311f-af53-425b-bacf-52ee8aabc505] Running
	I1101 09:56:50.401273  314033 system_pods.go:89] "kube-scheduler-functional-839033" [64f82c79-0601-430b-b83b-bc6e5d5b8b98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:56:50.401277  314033 system_pods.go:89] "storage-provisioner" [862df5e5-49f4-4e53-af42-ab10150d24bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:50.401291  314033 retry.go:31] will retry after 373.432257ms: missing components: kube-apiserver
	I1101 09:56:50.778855  314033 system_pods.go:86] 8 kube-system pods found
	I1101 09:56:50.778870  314033 system_pods.go:89] "coredns-66bc5c9577-r9l4n" [e940aacf-dc09-4c31-b24f-6d361deb0321] Running
	I1101 09:56:50.778883  314033 system_pods.go:89] "etcd-functional-839033" [593545b7-55ef-43ab-8ab5-8c6dac587f94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:56:50.778887  314033 system_pods.go:89] "kindnet-9849s" [bed7295e-331a-4f5e-b5cb-6e313c22a98f] Running
	I1101 09:56:50.778894  314033 system_pods.go:89] "kube-apiserver-functional-839033" [b2eac7bb-0760-4a32-817e-db3aac79f7a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:56:50.778899  314033 system_pods.go:89] "kube-controller-manager-functional-839033" [be6535a3-920c-4811-8a9e-b0f0e96044a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:56:50.778905  314033 system_pods.go:89] "kube-proxy-xwq72" [e6ac311f-af53-425b-bacf-52ee8aabc505] Running
	I1101 09:56:50.778910  314033 system_pods.go:89] "kube-scheduler-functional-839033" [64f82c79-0601-430b-b83b-bc6e5d5b8b98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:56:50.778916  314033 system_pods.go:89] "storage-provisioner" [862df5e5-49f4-4e53-af42-ab10150d24bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:50.778924  314033 system_pods.go:126] duration metric: took 1.008407322s to wait for k8s-apps to be running ...
	I1101 09:56:50.778930  314033 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:56:50.778993  314033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:56:50.791736  314033 system_svc.go:56] duration metric: took 12.796369ms WaitForService to wait for kubelet
	I1101 09:56:50.791753  314033 kubeadm.go:587] duration metric: took 2.154227026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:56:50.791770  314033 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:56:50.794632  314033 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:56:50.794646  314033 node_conditions.go:123] node cpu capacity is 2
	I1101 09:56:50.794656  314033 node_conditions.go:105] duration metric: took 2.881637ms to run NodePressure ...
	I1101 09:56:50.794666  314033 start.go:242] waiting for startup goroutines ...
	I1101 09:56:50.794672  314033 start.go:247] waiting for cluster config update ...
	I1101 09:56:50.794682  314033 start.go:256] writing updated cluster config ...
	I1101 09:56:50.794977  314033 ssh_runner.go:195] Run: rm -f paused
	I1101 09:56:50.798513  314033 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:56:50.801688  314033 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r9l4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:50.805854  314033 pod_ready.go:94] pod "coredns-66bc5c9577-r9l4n" is "Ready"
	I1101 09:56:50.805867  314033 pod_ready.go:86] duration metric: took 4.167284ms for pod "coredns-66bc5c9577-r9l4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:50.808004  314033 pod_ready.go:83] waiting for pod "etcd-functional-839033" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:56:52.814866  314033 pod_ready.go:104] pod "etcd-functional-839033" is not "Ready", error: <nil>
	I1101 09:56:53.813208  314033 pod_ready.go:94] pod "etcd-functional-839033" is "Ready"
	I1101 09:56:53.813223  314033 pod_ready.go:86] duration metric: took 3.005208895s for pod "etcd-functional-839033" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:53.815737  314033 pod_ready.go:83] waiting for pod "kube-apiserver-functional-839033" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:56:55.820917  314033 pod_ready.go:104] pod "kube-apiserver-functional-839033" is not "Ready", error: <nil>
	W1101 09:56:57.822418  314033 pod_ready.go:104] pod "kube-apiserver-functional-839033" is not "Ready", error: <nil>
	I1101 09:56:58.821940  314033 pod_ready.go:94] pod "kube-apiserver-functional-839033" is "Ready"
	I1101 09:56:58.821954  314033 pod_ready.go:86] duration metric: took 5.00620628s for pod "kube-apiserver-functional-839033" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:58.824555  314033 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-839033" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:58.829235  314033 pod_ready.go:94] pod "kube-controller-manager-functional-839033" is "Ready"
	I1101 09:56:58.829250  314033 pod_ready.go:86] duration metric: took 4.682668ms for pod "kube-controller-manager-functional-839033" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:58.831441  314033 pod_ready.go:83] waiting for pod "kube-proxy-xwq72" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:58.835930  314033 pod_ready.go:94] pod "kube-proxy-xwq72" is "Ready"
	I1101 09:56:58.835944  314033 pod_ready.go:86] duration metric: took 4.491176ms for pod "kube-proxy-xwq72" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:58.838391  314033 pod_ready.go:83] waiting for pod "kube-scheduler-functional-839033" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:59.020598  314033 pod_ready.go:94] pod "kube-scheduler-functional-839033" is "Ready"
	I1101 09:56:59.020613  314033 pod_ready.go:86] duration metric: took 182.210051ms for pod "kube-scheduler-functional-839033" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:56:59.020623  314033 pod_ready.go:40] duration metric: took 8.222090336s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:56:59.083016  314033 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:56:59.086384  314033 out.go:179] * Done! kubectl is now configured to use "functional-839033" cluster and "default" namespace by default
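
The pod_ready wait loop above checks that each kube-system control-plane pod reports a Ready condition before the cluster is declared usable. Purely as an illustration (not minikube's actual implementation), a minimal client-go sketch of the same idea follows; the kubeconfig path, the 2-second poll interval, and the 4-minute budget are assumptions mirrored from the log, and the label selectors are copied from the wait list logged above.

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's Ready condition is True.
    func podReady(p corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Assumption: kubeconfig at the default location written by minikube.
    	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Label selectors copied from the wait list in the log above.
    	selectors := []string{
    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    	}
    	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log

    	for _, sel := range selectors {
    		for {
    			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
    			allReady := err == nil && len(pods.Items) > 0
    			if err == nil {
    				for _, p := range pods.Items {
    					if !podReady(p) {
    						allReady = false
    					}
    				}
    			}
    			if allReady {
    				fmt.Printf("pods matching %q are Ready\n", sel)
    				break
    			}
    			if time.Now().After(deadline) {
    				fmt.Printf("timed out waiting for pods matching %q\n", sel)
    				break
    			}
    			time.Sleep(2 * time.Second) // poll, roughly like the retry intervals logged above
    		}
    	}
    }

Each selector is polled until every matching pod reports Ready or the deadline passes, which is the same retry-until-ready pattern visible in the system_pods.go and pod_ready.go lines earlier in this log.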
	
	
	==> CRI-O <==
	Nov 01 09:57:38 functional-839033 crio[3517]: time="2025-11-01T09:57:38.529815752Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-4hf2l Namespace:default ID:ca387e6e92ef274f6b8483ff4231131908ab0354860b18019e95ed51959b51a6 UID:086d77f0-be2b-4ea9-a182-0e04d0927d22 NetNS:/var/run/netns/77cc2b92-a64b-486e-a83a-f8580cad4be5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f800}] Aliases:map[]}"
	Nov 01 09:57:38 functional-839033 crio[3517]: time="2025-11-01T09:57:38.529970773Z" level=info msg="Checking pod default_hello-node-75c85bcc94-4hf2l for CNI network kindnet (type=ptp)"
	Nov 01 09:57:38 functional-839033 crio[3517]: time="2025-11-01T09:57:38.532801415Z" level=info msg="Ran pod sandbox ca387e6e92ef274f6b8483ff4231131908ab0354860b18019e95ed51959b51a6 with infra container: default/hello-node-75c85bcc94-4hf2l/POD" id=9cc634c7-9c90-40bf-9a5f-937140694d63 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:57:38 functional-839033 crio[3517]: time="2025-11-01T09:57:38.536609681Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=85e168d0-7f15-4b2a-8478-91c32284bf60 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.468335799Z" level=info msg="Stopping pod sandbox: fad7c5db27f177a2ce69f117eab0e522a47f4ffeecccf4221dd301d3aebcfd18" id=f41f04be-5164-44c6-96ac-22c2c0ce4c38 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.468402188Z" level=info msg="Stopped pod sandbox (already stopped): fad7c5db27f177a2ce69f117eab0e522a47f4ffeecccf4221dd301d3aebcfd18" id=f41f04be-5164-44c6-96ac-22c2c0ce4c38 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.469023378Z" level=info msg="Removing pod sandbox: fad7c5db27f177a2ce69f117eab0e522a47f4ffeecccf4221dd301d3aebcfd18" id=3e27974f-227d-4ef2-a958-852d7ca76f05 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.472839069Z" level=info msg="Removed pod sandbox: fad7c5db27f177a2ce69f117eab0e522a47f4ffeecccf4221dd301d3aebcfd18" id=3e27974f-227d-4ef2-a958-852d7ca76f05 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.473482118Z" level=info msg="Stopping pod sandbox: 2dd99b75a79f46418321b040ee05556115da5b68155fcb1c60f4c5fa9ea88ffe" id=dc15984c-ee98-4851-b435-9d2d5fd3945a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.473532843Z" level=info msg="Stopped pod sandbox (already stopped): 2dd99b75a79f46418321b040ee05556115da5b68155fcb1c60f4c5fa9ea88ffe" id=dc15984c-ee98-4851-b435-9d2d5fd3945a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.473854355Z" level=info msg="Removing pod sandbox: 2dd99b75a79f46418321b040ee05556115da5b68155fcb1c60f4c5fa9ea88ffe" id=2fe91e58-383e-44d9-ba1f-9975328d97f9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.477265186Z" level=info msg="Removed pod sandbox: 2dd99b75a79f46418321b040ee05556115da5b68155fcb1c60f4c5fa9ea88ffe" id=2fe91e58-383e-44d9-ba1f-9975328d97f9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.477775146Z" level=info msg="Stopping pod sandbox: 95886b9dbbb554e842904cc8445af97ad8a271f483ceb915f04ad1a43a33f12b" id=26d3e6b6-75aa-44e7-bae2-3848a543169f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.477829949Z" level=info msg="Stopped pod sandbox (already stopped): 95886b9dbbb554e842904cc8445af97ad8a271f483ceb915f04ad1a43a33f12b" id=26d3e6b6-75aa-44e7-bae2-3848a543169f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.478133951Z" level=info msg="Removing pod sandbox: 95886b9dbbb554e842904cc8445af97ad8a271f483ceb915f04ad1a43a33f12b" id=03278534-7884-4640-902b-b07d028f1465 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:41 functional-839033 crio[3517]: time="2025-11-01T09:57:41.481484171Z" level=info msg="Removed pod sandbox: 95886b9dbbb554e842904cc8445af97ad8a271f483ceb915f04ad1a43a33f12b" id=03278534-7884-4640-902b-b07d028f1465 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:51 functional-839033 crio[3517]: time="2025-11-01T09:57:51.428122883Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2ad973dd-fd90-4ad7-9f34-26ee5780edcd name=/runtime.v1.ImageService/PullImage
	Nov 01 09:58:04 functional-839033 crio[3517]: time="2025-11-01T09:58:04.425662788Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=744d363e-f76e-4679-b234-56cc4dd4f062 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:58:17 functional-839033 crio[3517]: time="2025-11-01T09:58:17.425992226Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c2b2849a-edc0-4a60-8168-5cae7e7e6d9c name=/runtime.v1.ImageService/PullImage
	Nov 01 09:58:45 functional-839033 crio[3517]: time="2025-11-01T09:58:45.427947729Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ab5f8355-c981-48e2-8514-a78aa622f8bf name=/runtime.v1.ImageService/PullImage
	Nov 01 09:59:07 functional-839033 crio[3517]: time="2025-11-01T09:59:07.426727012Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b3d5232b-2905-4b5c-ba89-0bb3eebd5469 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:00:15 functional-839033 crio[3517]: time="2025-11-01T10:00:15.426784735Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=291178c3-cd4a-489f-8b4a-6c5434c008b0 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:00:32 functional-839033 crio[3517]: time="2025-11-01T10:00:32.426417144Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6935c762-ac76-48b9-875d-a8f01b8fef9a name=/runtime.v1.ImageService/PullImage
	Nov 01 10:03:04 functional-839033 crio[3517]: time="2025-11-01T10:03:04.426654421Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b931c749-bb44-4048-95ea-1df6d042ce20 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:03:19 functional-839033 crio[3517]: time="2025-11-01T10:03:19.428126807Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2c6736bc-c9ca-4eb2-8ae0-cce8edbc370f name=/runtime.v1.ImageService/PullImage
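
The repeated "Pulling image: kicbase/echo-server:latest" entries show CRI-O handling /runtime.v1.ImageService/PullImage requests that never complete within the test window. For illustration only (not part of the test run), a minimal Go sketch of issuing that same CRI RPC directly against the CRI-O socket is below; the socket path /var/run/crio/crio.sock and the fully qualified image name are assumptions.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumption: CRI-O listening on its default socket inside the node.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewImageServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	// The same image CRI-O is shown pulling in the log entries above.
    	resp, err := client.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:latest"},
    	})
    	if err != nil {
    		fmt.Println("pull failed:", err)
    		return
    	}
    	fmt.Println("pulled image ref:", resp.ImageRef)
    }

Running such a probe from inside the node would distinguish a registry/network problem (the RPC itself errors or times out) from a runtime-side stall, which the log alone does not show.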
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	910fd45421111       docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424   9 minutes ago       Running             myfrontend                0                   8f890c2c0f7d0       sp-pod                                      default
	854e6882df8d2       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   850c704b8e353       nginx-svc                                   default
	7e9030a93c5fa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       4                   1c390d810db0c       storage-provisioner                         kube-system
	b77201b0d49a4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            1                   a22ca5dbd8930       kube-apiserver-functional-839033            kube-system
	5cb459f4f7c0b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   e24d81d69bb02       coredns-66bc5c9577-r9l4n                    kube-system
	61d50fc28eb22       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       3                   1c390d810db0c       storage-provisioner                         kube-system
	4fec0acfc9b53       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Exited              kube-apiserver            0                   a22ca5dbd8930       kube-apiserver-functional-839033            kube-system
	b0e5b6f23f460       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   b5384135f35f1       kube-scheduler-functional-839033            kube-system
	258af7b7150c9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   8200cabdd9366       kindnet-9849s                               kube-system
	ecff2877dd0d1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   ccfe72d685bef       kube-controller-manager-functional-839033   kube-system
	b8b40b83ecb51       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   a1b82642c0567       etcd-functional-839033                      kube-system
	fe99302d6bba0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   c3749b13325d2       kube-proxy-xwq72                            kube-system
	186dffdc84525       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   b5384135f35f1       kube-scheduler-functional-839033            kube-system
	5b137764de739       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   ccfe72d685bef       kube-controller-manager-functional-839033   kube-system
	4384f1c383464       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   a1b82642c0567       etcd-functional-839033                      kube-system
	8983a5a37fc6d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   e24d81d69bb02       coredns-66bc5c9577-r9l4n                    kube-system
	aaa4f4b1b02a8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   c3749b13325d2       kube-proxy-xwq72                            kube-system
	1f3b78decd05b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   8200cabdd9366       kindnet-9849s                               kube-system
	
	
	==> coredns [5cb459f4f7c0bd8201f20f4016f4cf05f13551a1db4f7a182ce3ac2a4693d5db] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54519 - 41330 "HINFO IN 2375746087868792523.5470680187189647213. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018052867s
	
	
	==> coredns [8983a5a37fc6dbbc399f4dd6a26d3f4139deb5284d4e6297f9c98cc0b042307a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55252 - 57597 "HINFO IN 9155018796952709272.6786712540175958634. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018540289s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-839033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-839033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=functional-839033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_54_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:54:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-839033
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:07:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:07:04 +0000   Sat, 01 Nov 2025 09:54:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:07:04 +0000   Sat, 01 Nov 2025 09:54:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:07:04 +0000   Sat, 01 Nov 2025 09:54:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:07:04 +0000   Sat, 01 Nov 2025 09:55:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-839033
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                76b1a8a3-8a27-4cd7-b1e0-70c642093aed
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-4hf2l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  default                     hello-node-connect-7d85dfc575-4prgw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-r9l4n                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-839033                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-9849s                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-839033             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-839033    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xwq72                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-839033             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-839033 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-839033 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-839033 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-839033 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-839033 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-839033 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-839033 event: Registered Node functional-839033 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-839033 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-839033 event: Registered Node functional-839033 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-839033 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-839033 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-839033 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-839033 event: Registered Node functional-839033 in Controller
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014607] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.506888] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032735] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.832337] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.644621] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:37] hrtimer: interrupt took 44045431 ns
	[Nov 1 09:38] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Nov 1 09:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:47] overlayfs: idmapped layers are currently not supported
	[  +0.058238] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4384f1c383464764bc763d31c520be563d2c9486a163e2dfbbfefc9dc9d676c3] <==
	{"level":"warn","ts":"2025-11-01T09:55:58.389961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:58.425821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:58.465166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:58.487607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:58.505598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:58.524431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:58.582106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:56:23.585643Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:56:23.585697Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-839033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T09:56:23.585799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:56:23.738807Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:56:23.738893Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:56:23.738915Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-01T09:56:23.738984Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:56:23.739000Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:56:23.739063Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:56:23.739096Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:56:23.739106Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:56:23.739147Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:56:23.739162Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:56:23.739168Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:56:23.743112Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T09:56:23.743193Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:56:23.743270Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T09:56:23.743298Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-839033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [b8b40b83ecb51e77a9bfde5be425a0c2c8e96cdbfd75794537b2135686e2c00b] <==
	{"level":"warn","ts":"2025-11-01T09:56:45.334887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.359860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.380184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.401926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.419165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.482483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.504584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.515098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.533553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.550018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.562523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.581236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.601280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.614094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.632329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.663173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.677505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.695198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.723648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.739710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.756377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:56:45.806755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53428","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:06:44.524813Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1115}
	{"level":"info","ts":"2025-11-01T10:06:44.548594Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1115,"took":"23.392123ms","hash":1617904832,"current-db-size-bytes":3280896,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1462272,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-01T10:06:44.548643Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1617904832,"revision":1115,"compact-revision":-1}
	
	
	==> kernel <==
	 10:07:26 up  1:49,  0 user,  load average: 0.08, 0.37, 1.39
	Linux functional-839033 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1f3b78decd05b7af9e8fcadb0df3bf154e56e65a4ef6993bd03be4de9b420555] <==
	I1101 09:55:54.940763       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:55:55.025525       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1101 09:55:55.025647       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:55:55.025660       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:55:55.025672       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:55:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:55:55.235079       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:55:55.235171       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:55:55.235238       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:55:55.261799       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:55:55.262105       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:55:55.262277       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:55:55.262434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:55:55.262574       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 09:55:59.835443       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:55:59.835474       1 metrics.go:72] Registering metrics
	I1101 09:55:59.835538       1 controller.go:711] "Syncing nftables rules"
	I1101 09:56:05.235374       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:56:05.235416       1 main.go:301] handling current node
	I1101 09:56:15.235310       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:56:15.235355       1 main.go:301] handling current node
	
	
	==> kindnet [258af7b7150c9abd829acfa1ec6ef27d643890b4812cc39d10ddcb96e5bacae1] <==
	I1101 10:05:18.870407       1 main.go:301] handling current node
	I1101 10:05:28.870357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:05:28.870414       1 main.go:301] handling current node
	I1101 10:05:38.870096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:05:38.870218       1 main.go:301] handling current node
	I1101 10:05:48.870270       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:05:48.870309       1 main.go:301] handling current node
	I1101 10:05:58.870413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:05:58.870474       1 main.go:301] handling current node
	I1101 10:06:08.869488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:06:08.869525       1 main.go:301] handling current node
	I1101 10:06:18.870247       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:06:18.870300       1 main.go:301] handling current node
	I1101 10:06:28.869431       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:06:28.869466       1 main.go:301] handling current node
	I1101 10:06:38.871569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:06:38.871606       1 main.go:301] handling current node
	I1101 10:06:48.869626       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:06:48.869662       1 main.go:301] handling current node
	I1101 10:06:58.869568       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:06:58.869620       1 main.go:301] handling current node
	I1101 10:07:08.870221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:07:08.870259       1 main.go:301] handling current node
	I1101 10:07:18.870364       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:07:18.870428       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4fec0acfc9b538371bfc88b54fca79448463b280579f926deb4ee9e9e798a45f] <==
	I1101 09:56:43.109752       1 options.go:263] external host was not specified, using 192.168.49.2
	I1101 09:56:43.112438       1 server.go:150] Version: v1.34.1
	I1101 09:56:43.112567       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1101 09:56:43.112846       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
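
The exited kube-apiserver attempt above fails because port 8441 is still held by another process, so its listener cannot be created. A tiny, self-contained Go illustration of that failure mode (unrelated to apiserver code; port 8441 is reused only to mirror the log) is sketched below.

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// First listener takes the port.
    	l1, err := net.Listen("tcp", "0.0.0.0:8441")
    	if err != nil {
    		fmt.Println("first listen failed:", err)
    		return
    	}
    	defer l1.Close()

    	// A second listener on the same address fails the same way the log shows:
    	// "bind: address already in use".
    	if _, err := net.Listen("tcp", "0.0.0.0:8441"); err != nil {
    		fmt.Println("second listen error:", err)
    	}
    }

Consistent with this, the second apiserver container (b77201b0d49a4, shown Running in the container status table) starts successfully once the stale holder of 8441 is gone.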
	
	
	==> kube-apiserver [b77201b0d49a4019fbf4542f76cc8491e9668ab81b8e52374628c4da93f3df90] <==
	I1101 09:56:46.701747       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:56:46.754473       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:56:46.760823       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:56:46.761716       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:56:46.764815       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:56:46.764841       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:56:46.764985       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:56:46.766784       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:56:47.358520       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:56:48.352945       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:56:48.458723       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:56:48.470717       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:56:48.541989       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:56:48.550502       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:56:50.130865       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:56:50.329201       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:56:50.379026       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:57:02.862693       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.85.97"}
	I1101 09:57:14.982245       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.71.197"}
	I1101 09:57:24.662669       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.170.178"}
	E1101 09:57:31.252827       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57588: use of closed network connection
	E1101 09:57:31.765616       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1101 09:57:38.077631       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57628: use of closed network connection
	I1101 09:57:38.309728       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.94.179"}
	I1101 10:06:46.647772       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5b137764de73916eb6c78bc20a157c135eb998beefd7cc8c6d9bca21a9a1de2f] <==
	I1101 09:56:03.091906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:56:03.092002       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:56:03.092035       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:56:03.093820       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:56:03.095528       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:56:03.111329       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:56:03.113682       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:56:03.113883       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:56:03.114099       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:56:03.114110       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:56:03.115281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:56:03.116469       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:56:03.116571       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:56:03.116631       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:56:03.116989       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:56:03.117054       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:56:03.116992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:56:03.117126       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-839033"
	I1101 09:56:03.117257       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:56:03.119693       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:56:03.124399       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:56:03.127706       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:56:03.129994       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:56:03.133319       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:56:03.139592       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-controller-manager [ecff2877dd0d191e77cf957ff86b95bf806ef7a28c0dc1fa0a082ba33a6d8b38] <==
	I1101 09:56:49.982426       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:56:49.982640       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:56:49.985866       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:56:49.991203       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:56:49.994471       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:56:49.997346       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:56:49.999469       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:56:50.006934       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:56:50.007568       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:56:50.007814       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:56:50.007918       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:56:50.009488       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:56:50.013025       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:56:50.017210       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:56:50.018687       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:56:50.019444       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:56:50.021896       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:56:50.022992       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:56:50.023035       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:56:50.023065       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:56:50.023510       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:56:50.025822       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:56:50.034236       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:56:50.034269       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:56:50.034278       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [aaa4f4b1b02a84ef1fb671ccf5c7867210146238517165d4f4c6826c8466897c] <==
	I1101 09:55:55.079369       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:55:55.276499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:55:59.821359       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:55:59.821416       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:55:59.821488       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:55:59.884518       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:55:59.884579       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:55:59.999585       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:55:59.999932       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:55:59.999959       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:56:00.063814       1 config.go:200] "Starting service config controller"
	I1101 09:56:00.063839       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:56:00.063888       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:56:00.063893       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:56:00.063909       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:56:00.063913       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:56:00.115928       1 config.go:309] "Starting node config controller"
	I1101 09:56:00.115959       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:56:00.115969       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:56:00.177164       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:56:00.177196       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:56:00.177238       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fe99302d6bba0109a2a18b0e0649bf16d88a3c1e2afc5c75477de00995596607] <==
	I1101 09:56:38.477766       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:56:38.679445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:56:42.780978       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:56:42.781010       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:56:42.781086       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:56:42.824232       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:56:42.824742       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:56:42.831270       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:56:42.831657       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:56:42.831870       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:56:42.833327       1 config.go:200] "Starting service config controller"
	I1101 09:56:42.833380       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:56:42.833400       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:56:42.833404       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:56:42.833414       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:56:42.833419       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:56:42.843376       1 config.go:309] "Starting node config controller"
	I1101 09:56:42.843409       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:56:42.843418       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:56:42.933701       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:56:42.933778       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:56:42.933793       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [186dffdc84525c39ce066399bf6f59c88cc80a5566fc5eb01ee87408fe93780f] <==
	I1101 09:55:58.424102       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:56:00.264814       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:56:00.264858       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:56:00.329879       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:56:00.331357       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:56:00.346261       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:56:00.331288       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:56:00.331383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:56:00.346354       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:56:00.331394       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:56:00.346752       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:56:00.462720       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:56:00.462813       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:56:00.583762       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:56:23.591085       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:56:23.591775       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:56:23.591854       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1101 09:56:23.591959       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:56:23.592154       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:56:23.592299       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:56:23.592313       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:56:23.592362       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b0e5b6f23f460599e0b89ad085edc4ec1a608d4017ef4427e6fe291167497538] <==
	I1101 09:56:44.796542       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:56:46.604396       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:56:46.604501       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:56:46.604537       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:56:46.604567       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:56:46.689508       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:56:46.694415       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:56:46.700698       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:56:46.700817       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:56:46.701464       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:56:46.701979       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:56:46.801874       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:04:51 functional-839033 kubelet[3964]: E1101 10:04:51.426227    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:04:58 functional-839033 kubelet[3964]: E1101 10:04:58.425542    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:05:02 functional-839033 kubelet[3964]: E1101 10:05:02.425573    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:05:09 functional-839033 kubelet[3964]: E1101 10:05:09.426280    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:05:15 functional-839033 kubelet[3964]: E1101 10:05:15.425707    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:05:22 functional-839033 kubelet[3964]: E1101 10:05:22.425847    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:05:28 functional-839033 kubelet[3964]: E1101 10:05:28.425620    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:05:33 functional-839033 kubelet[3964]: E1101 10:05:33.425532    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:05:40 functional-839033 kubelet[3964]: E1101 10:05:40.425973    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:05:47 functional-839033 kubelet[3964]: E1101 10:05:47.425752    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:05:53 functional-839033 kubelet[3964]: E1101 10:05:53.425585    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:06:00 functional-839033 kubelet[3964]: E1101 10:06:00.425999    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:06:05 functional-839033 kubelet[3964]: E1101 10:06:05.425769    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:06:12 functional-839033 kubelet[3964]: E1101 10:06:12.425841    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:06:18 functional-839033 kubelet[3964]: E1101 10:06:18.425783    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:06:23 functional-839033 kubelet[3964]: E1101 10:06:23.425503    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:06:29 functional-839033 kubelet[3964]: E1101 10:06:29.427079    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:06:38 functional-839033 kubelet[3964]: E1101 10:06:38.425993    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:06:40 functional-839033 kubelet[3964]: E1101 10:06:40.425377    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:06:49 functional-839033 kubelet[3964]: E1101 10:06:49.426050    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:06:51 functional-839033 kubelet[3964]: E1101 10:06:51.426319    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:07:04 functional-839033 kubelet[3964]: E1101 10:07:04.425363    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	Nov 01 10:07:05 functional-839033 kubelet[3964]: E1101 10:07:05.426081    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:07:18 functional-839033 kubelet[3964]: E1101 10:07:18.425610    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4prgw" podUID="eef2e76e-2b0d-4647-9145-488ac3ab77c1"
	Nov 01 10:07:18 functional-839033 kubelet[3964]: E1101 10:07:18.425688    3964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4hf2l" podUID="086d77f0-be2b-4ea9-a182-0e04d0927d22"
	
	
	==> storage-provisioner [61d50fc28eb22ec65a2ff2f828e7c7a00034cffcc5dad1cf44a4841c863861a8] <==
	I1101 09:56:43.653389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:56:43.655302       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [7e9030a93c5fa248b2837726f95b3e15d76abdb74b7f6634763d8f8902afa946] <==
	W1101 10:07:01.783589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:03.786652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:03.791039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:05.793889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:05.797970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:07.801270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:07.808227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:09.811877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:09.816413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:11.819723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:11.826664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:13.829419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:13.834229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:15.837429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:15.842096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:17.845735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:17.853202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:19.856758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:19.863455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:21.866596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:21.871037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:23.874635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:23.879326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:25.885787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:07:25.899098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
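Note on the log dump above: the kube-proxy advisory about nodePortAddresses being unset and the storage-provisioner's "v1 Endpoints is deprecated" warnings are configuration hints, not errors, and do not account for this failure. For reference, the kube-proxy suggestion corresponds to roughly this KubeProxyConfiguration fragment (a sketch of the upstream option named by the log line itself; minikube leaves it unset here):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses:
	  - primary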
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-839033 -n functional-839033
helpers_test.go:269: (dbg) Run:  kubectl --context functional-839033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-4hf2l hello-node-connect-7d85dfc575-4prgw
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-839033 describe pod hello-node-75c85bcc94-4hf2l hello-node-connect-7d85dfc575-4prgw
helpers_test.go:290: (dbg) kubectl --context functional-839033 describe pod hello-node-75c85bcc94-4hf2l hello-node-connect-7d85dfc575-4prgw:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-4hf2l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-839033/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:57:38 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b9sz8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b9sz8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m49s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-4hf2l to functional-839033
	  Normal   Pulling    6m56s (x5 over 9m50s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 9m50s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m56s (x5 over 9m50s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m47s (x21 over 9m50s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m47s (x21 over 9m50s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-4prgw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-839033/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:57:24 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zmp7t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zmp7t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4prgw to functional-839033
	  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m2s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.66s)
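The post-mortem above shows the actual blocker: CRI-O on this node resolves short names in enforcing mode, so the unqualified reference "kicbase/echo-server" is rejected with "returns ambiguous list" and the echo-server pods never start. Two usual ways to avoid this outside the harness, sketched with assumed values (the docker.io qualification and the registries.conf path inside the node are assumptions, not something the test configures):

	# use a fully qualified image reference in the workload
	kubectl --context functional-839033 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest

	# or relax short-name handling on the node (e.g. /etc/containers/registries.conf), then restart crio
	short-name-mode = "permissive"
	unqualified-search-registries = ["docker.io"]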

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image load --daemon kicbase/echo-server:functional-839033 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-839033" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)
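The assertion only checks `image ls` output for the exact reference "kicbase/echo-server:functional-839033". When triaging by hand it helps to see how the CRI-O image store actually lists whatever was loaded, since the stored name can differ from the name passed to `image load`; a diagnostic sketch, not part of the test:

	out/minikube-linux-arm64 -p functional-839033 image ls --format table
	out/minikube-linux-arm64 -p functional-839033 ssh -- sudo crictl images | grep echo-server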

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image load --daemon kicbase/echo-server:functional-839033 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-839033" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-839033
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image load --daemon kicbase/echo-server:functional-839033 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-839033" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image save kicbase/echo-server:functional-839033 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)
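Here `image save` exited successfully but the tarball never appeared at the given path, and the later file-based subtests inherit that. A minimal manual check, with an arbitrary /tmp path standing in for the workspace path used above:

	out/minikube-linux-arm64 -p functional-839033 image save kicbase/echo-server:functional-839033 /tmp/echo-server-save.tar --alsologtostderr
	ls -l /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head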

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1101 09:57:13.832368  317964 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:13.832538  317964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:13.832549  317964 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:13.832554  317964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:13.832793  317964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:57:13.833412  317964 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:13.833536  317964 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:13.833987  317964 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
	I1101 09:57:13.856691  317964 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:13.856741  317964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
	I1101 09:57:13.879851  317964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
	I1101 09:57:13.991447  317964 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1101 09:57:13.991517  317964 cache_images.go:255] Failed to load cached images for "functional-839033": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1101 09:57:13.991538  317964 cache_images.go:267] failed pushing to: functional-839033

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-839033
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image save --daemon kicbase/echo-server:functional-839033 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-839033
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-839033: exit status 1 (15.65815ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-839033

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-839033

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
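The check expects `image save --daemon` to land the image in the host Docker daemon as localhost/kicbase/echo-server:functional-839033, and it is absent. When reproducing, listing every image name that mentions echo-server is more informative than inspecting one exact reference (diagnostic sketch only):

	out/minikube-linux-arm64 -p functional-839033 image save --daemon kicbase/echo-server:functional-839033 --alsologtostderr
	docker images | grep echo-server || echo "no echo-server image reached the local daemon"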

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-839033 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-839033 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-4hf2l" [086d77f0-be2b-4ea9-a182-0e04d0927d22] Pending
helpers_test.go:352: "hello-node-75c85bcc94-4hf2l" [086d77f0-be2b-4ea9-a182-0e04d0927d22] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1101 09:57:47.541262  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:00:03.676883  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:00:31.383104  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:05:03.676686  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-839033 -n functional-839033
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 10:07:38.728056978 +0000 UTC m=+1238.854508846
functional_test.go:1460: (dbg) Run:  kubectl --context functional-839033 describe po hello-node-75c85bcc94-4hf2l -n default
functional_test.go:1460: (dbg) kubectl --context functional-839033 describe po hello-node-75c85bcc94-4hf2l -n default:
Name:             hello-node-75c85bcc94-4hf2l
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-839033/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:57:38 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b9sz8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-b9sz8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-4hf2l to functional-839033
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m57s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m57s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-839033 logs hello-node-75c85bcc94-4hf2l -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-839033 logs hello-node-75c85bcc94-4hf2l -n default: exit status 1 (177.93616ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-4hf2l" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-839033 logs hello-node-75c85bcc94-4hf2l -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.95s)
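This is the same short-name pull failure recorded for ServiceCmdConnect: the deployment is created from the bare name "kicbase/echo-server", which the enforcing short-name policy refuses to resolve, so no pod becomes Ready within the 10m window. Reproducing outside the harness with a fully qualified reference sidesteps the policy (the :latest tag is taken from the kubelet message above):

	kubectl --context functional-839033 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-839033 expose deployment hello-node --type=NodePort --port=8080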

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 service --namespace=default --https --url hello-node: exit status 115 (561.042127ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31257
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-839033 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 service hello-node --url --format={{.IP}}: exit status 115 (542.574182ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-839033 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 service hello-node --url: exit status 115 (529.969933ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31257
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-839033 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31257
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)
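The three service subtests (HTTPS, Format, URL) fail the same way: minikube resolves the NodePort URL, then exits with SVC_UNREACHABLE because the hello-node service has no running pod behind it, which is the image-pull failure above rather than a service problem. A quick way to confirm the empty backing set when triaging (diagnostic sketch):

	kubectl --context functional-839033 get service hello-node
	kubectl --context functional-839033 get endpointslices -l kubernetes.io/service-name=hello-node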

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-827677 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-827677 --output=json --user=testUser: exit status 80 (1.77275448s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"64489471-0de7-4680-b5ed-bc10d3164e30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-827677 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"62ab7e01-e414-4155-8501-0548a5a13673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T10:20:30Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"f198e7d6-428a-4356-97d7-ca8ac4b75efd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-827677 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.77s)

                                                
                                    
TestJSONOutput/unpause/Command (1.82s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-827677 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-827677 --output=json --user=testUser: exit status 80 (1.823494977s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ce169a17-1959-4c4f-8b76-950baf05f074","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-827677 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"c0af6e49-a24c-4f4c-914c-1a1a548c7d1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T10:20:32Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"503e4892-fd72-4634-8fa1-f07e4801437b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-827677 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.82s)
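Both the pause and unpause runs against json-output-827677 die at the same point: `sudo runc list -f json` inside the node cannot open `/run/runc`. A hedged follow-up, assuming the profile has not yet been torn down, is simply to look at that directory from inside the node:

    # diagnostic sketch only; these commands are not run by the test
    out/minikube-linux-arm64 -p json-output-827677 ssh "sudo ls -ld /run/runc; sudo runc list -f json"

If /run/runc does not exist at all, the failure points at the node image or the CRI runtime configuration rather than at the pause/unpause code paths.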

                                                
                                    
TestPause/serial/Pause (6.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-524446 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-524446 --alsologtostderr -v=5: exit status 80 (1.891066741s)

                                                
                                                
-- stdout --
	* Pausing node pause-524446 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:43:03.262125  455321 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:03.262956  455321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:03.262971  455321 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:03.262976  455321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:03.263253  455321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:43:03.263526  455321 out.go:368] Setting JSON to false
	I1101 10:43:03.263554  455321 mustload.go:66] Loading cluster: pause-524446
	I1101 10:43:03.264053  455321 config.go:182] Loaded profile config "pause-524446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:03.264590  455321 cli_runner.go:164] Run: docker container inspect pause-524446 --format={{.State.Status}}
	I1101 10:43:03.287765  455321 host.go:66] Checking if "pause-524446" exists ...
	I1101 10:43:03.288105  455321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:03.351254  455321 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:43:03.341405272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:43:03.351916  455321 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-524446 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:43:03.355046  455321 out.go:179] * Pausing node pause-524446 ... 
	I1101 10:43:03.358809  455321 host.go:66] Checking if "pause-524446" exists ...
	I1101 10:43:03.359148  455321 ssh_runner.go:195] Run: systemctl --version
	I1101 10:43:03.359208  455321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:43:03.378015  455321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:43:03.484078  455321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:03.497028  455321 pause.go:52] kubelet running: true
	I1101 10:43:03.497097  455321 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:03.724242  455321 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:03.724341  455321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:03.796449  455321 cri.go:89] found id: "562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804"
	I1101 10:43:03.796475  455321 cri.go:89] found id: "0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0"
	I1101 10:43:03.796480  455321 cri.go:89] found id: "4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace"
	I1101 10:43:03.796483  455321 cri.go:89] found id: "c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf"
	I1101 10:43:03.796487  455321 cri.go:89] found id: "c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0"
	I1101 10:43:03.796490  455321 cri.go:89] found id: "feb86306de65e800bdcacea118cfeb11cf011bdbd9410d36359d2de63e40e91f"
	I1101 10:43:03.796493  455321 cri.go:89] found id: "e6846cf4faaf8570defebff2c13c97d96e24a6c68f780e2878dfc1550e88dd21"
	I1101 10:43:03.796496  455321 cri.go:89] found id: "4e118b3a4f353c5d20da38b1b32e1892b43e71a1fc32c7794559b9e357567505"
	I1101 10:43:03.796499  455321 cri.go:89] found id: "7af788e2c649b3573c775b3824d9e334bcc1638c0fff42cb56de79e9832c2866"
	I1101 10:43:03.796531  455321 cri.go:89] found id: "d4694a41e4759b5ed3c113f391ee45c1533da5781f43154eb18a5c37c530d6f4"
	I1101 10:43:03.796541  455321 cri.go:89] found id: "d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56"
	I1101 10:43:03.796545  455321 cri.go:89] found id: "81746508ca1cda9731c90bead6a9925450ea0a9dbc2627c6c3ccbb245e90b516"
	I1101 10:43:03.796548  455321 cri.go:89] found id: "33b82756faa61b08a4a452bdc72a95129c5eae8d424452ca97c3b65a03880595"
	I1101 10:43:03.796552  455321 cri.go:89] found id: "f367c628d682bb258e0b9abe783adb0a9d4c25ac0de1c1c324d2da0d34b69daa"
	I1101 10:43:03.796556  455321 cri.go:89] found id: ""
	I1101 10:43:03.796626  455321 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:03.808507  455321 retry.go:31] will retry after 218.267897ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:03Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:43:04.026988  455321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:04.040454  455321 pause.go:52] kubelet running: false
	I1101 10:43:04.040539  455321 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:04.186655  455321 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:04.186757  455321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:04.268350  455321 cri.go:89] found id: "562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804"
	I1101 10:43:04.268385  455321 cri.go:89] found id: "0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0"
	I1101 10:43:04.268391  455321 cri.go:89] found id: "4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace"
	I1101 10:43:04.268394  455321 cri.go:89] found id: "c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf"
	I1101 10:43:04.268398  455321 cri.go:89] found id: "c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0"
	I1101 10:43:04.268401  455321 cri.go:89] found id: "feb86306de65e800bdcacea118cfeb11cf011bdbd9410d36359d2de63e40e91f"
	I1101 10:43:04.268404  455321 cri.go:89] found id: "e6846cf4faaf8570defebff2c13c97d96e24a6c68f780e2878dfc1550e88dd21"
	I1101 10:43:04.268407  455321 cri.go:89] found id: "4e118b3a4f353c5d20da38b1b32e1892b43e71a1fc32c7794559b9e357567505"
	I1101 10:43:04.268410  455321 cri.go:89] found id: "7af788e2c649b3573c775b3824d9e334bcc1638c0fff42cb56de79e9832c2866"
	I1101 10:43:04.268433  455321 cri.go:89] found id: "d4694a41e4759b5ed3c113f391ee45c1533da5781f43154eb18a5c37c530d6f4"
	I1101 10:43:04.268450  455321 cri.go:89] found id: "d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56"
	I1101 10:43:04.268454  455321 cri.go:89] found id: "81746508ca1cda9731c90bead6a9925450ea0a9dbc2627c6c3ccbb245e90b516"
	I1101 10:43:04.268457  455321 cri.go:89] found id: "33b82756faa61b08a4a452bdc72a95129c5eae8d424452ca97c3b65a03880595"
	I1101 10:43:04.268460  455321 cri.go:89] found id: "f367c628d682bb258e0b9abe783adb0a9d4c25ac0de1c1c324d2da0d34b69daa"
	I1101 10:43:04.268463  455321 cri.go:89] found id: ""
	I1101 10:43:04.268550  455321 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:04.281895  455321 retry.go:31] will retry after 480.184222ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:04Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:43:04.762297  455321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:04.777694  455321 pause.go:52] kubelet running: false
	I1101 10:43:04.777808  455321 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:04.973529  455321 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:04.973663  455321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:05.063694  455321 cri.go:89] found id: "562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804"
	I1101 10:43:05.063747  455321 cri.go:89] found id: "0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0"
	I1101 10:43:05.063754  455321 cri.go:89] found id: "4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace"
	I1101 10:43:05.063781  455321 cri.go:89] found id: "c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf"
	I1101 10:43:05.063793  455321 cri.go:89] found id: "c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0"
	I1101 10:43:05.063798  455321 cri.go:89] found id: "feb86306de65e800bdcacea118cfeb11cf011bdbd9410d36359d2de63e40e91f"
	I1101 10:43:05.063802  455321 cri.go:89] found id: "e6846cf4faaf8570defebff2c13c97d96e24a6c68f780e2878dfc1550e88dd21"
	I1101 10:43:05.063805  455321 cri.go:89] found id: "4e118b3a4f353c5d20da38b1b32e1892b43e71a1fc32c7794559b9e357567505"
	I1101 10:43:05.063809  455321 cri.go:89] found id: "7af788e2c649b3573c775b3824d9e334bcc1638c0fff42cb56de79e9832c2866"
	I1101 10:43:05.063821  455321 cri.go:89] found id: "d4694a41e4759b5ed3c113f391ee45c1533da5781f43154eb18a5c37c530d6f4"
	I1101 10:43:05.063825  455321 cri.go:89] found id: "d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56"
	I1101 10:43:05.063828  455321 cri.go:89] found id: "81746508ca1cda9731c90bead6a9925450ea0a9dbc2627c6c3ccbb245e90b516"
	I1101 10:43:05.063831  455321 cri.go:89] found id: "33b82756faa61b08a4a452bdc72a95129c5eae8d424452ca97c3b65a03880595"
	I1101 10:43:05.063837  455321 cri.go:89] found id: "f367c628d682bb258e0b9abe783adb0a9d4c25ac0de1c1c324d2da0d34b69daa"
	I1101 10:43:05.063857  455321 cri.go:89] found id: ""
	I1101 10:43:05.063909  455321 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:05.085221  455321 out.go:203] 
	W1101 10:43:05.088218  455321 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:43:05.088243  455321 out.go:285] * 
	* 
	W1101 10:43:05.094349  455321 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:43:05.097612  455321 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-524446 --alsologtostderr -v=5" : exit status 80
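The trace above shows the contradiction directly: crictl returns fourteen running container IDs in the target namespaces, yet `sudo runc list -f json` finds no state because `/run/runc` is missing, and minikube gives up after its retries. One plausible explanation (an assumption, not established by this log) is that crio created the containers with a different OCI runtime whose state root lives elsewhere, for example crun. A hedged check from inside the node, assuming pause-524446 is still running:

    # sketch only: see which low-level runtimes are installed and which state roots exist
    out/minikube-linux-arm64 -p pause-524446 ssh "command -v runc crun; sudo ls -ld /run/runc /run/crun 2>/dev/null; sudo crictl ps --quiet | head -n 3"

Containers visible to crictl but absent from runc's state directory would support that reading.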
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-524446
helpers_test.go:243: (dbg) docker inspect pause-524446:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb",
	        "Created": "2025-11-01T10:41:20.236439178Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:41:20.307967461Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb/hosts",
	        "LogPath": "/var/lib/docker/containers/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb-json.log",
	        "Name": "/pause-524446",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-524446:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-524446",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb",
	                "LowerDir": "/var/lib/docker/overlay2/2e31acb5976be0087e3ede684f65ab6e050f6023357c27b17712a54e3e0726aa-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e31acb5976be0087e3ede684f65ab6e050f6023357c27b17712a54e3e0726aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e31acb5976be0087e3ede684f65ab6e050f6023357c27b17712a54e3e0726aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e31acb5976be0087e3ede684f65ab6e050f6023357c27b17712a54e3e0726aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-524446",
	                "Source": "/var/lib/docker/volumes/pause-524446/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-524446",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-524446",
	                "name.minikube.sigs.k8s.io": "pause-524446",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6574481d97f2bfa3d2cb8e4a7ece5dbbb1a9bce26973b3a5c4a41fca9b872f40",
	            "SandboxKey": "/var/run/docker/netns/6574481d97f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-524446": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:be:99:3c:b9:a5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34b1034fb6de1150d95cb2d577a32e1de985ff9a5dca5f188af786a956aadd65",
	                    "EndpointID": "a02f58532b604ad576b6b4ef75d97da009527e3b798eacce948cb8d50911bab6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-524446",
	                        "faec4cf7352d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
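The full inspect dump confirms the kic container is still Running and not Paused, with SSH published on 127.0.0.1:33393. When only those fields matter, the same Go-template style the harness already uses (see the `docker container inspect -f` calls in the trace) can pull them out directly; a minimal sketch using the container name from this run:

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}} ssh-port={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-524446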
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-524446 -n pause-524446
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-524446 -n pause-524446: exit status 2 (470.091317ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
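The host itself still reports Running, so the non-zero exit most likely reflects the cluster components: the failed pause attempt had already run `systemctl disable --now kubelet` (the trace above shows "kubelet running" flip from true to false), so kubelet is down while the container stays up. A per-component view, assuming the profile still exists, would make that explicit:

    out/minikube-linux-arm64 status -p pause-524446 --output=json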
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-524446 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-524446 logs -n 25: (1.385202454s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-276658 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p missing-upgrade-941524 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-941524    │ jenkins │ v1.32.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p NoKubernetes-276658 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ delete  │ -p NoKubernetes-276658                                                                                                                   │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p NoKubernetes-276658 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p missing-upgrade-941524 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-941524    │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:39 UTC │
	│ ssh     │ -p NoKubernetes-276658 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p NoKubernetes-276658                                                                                                                   │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p NoKubernetes-276658 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ ssh     │ -p NoKubernetes-276658 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ delete  │ -p NoKubernetes-276658                                                                                                                   │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:39 UTC │
	│ delete  │ -p missing-upgrade-941524                                                                                                                │ missing-upgrade-941524    │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p stopped-upgrade-124684 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-124684    │ jenkins │ v1.32.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ stop    │ -p kubernetes-upgrade-946953                                                                                                             │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │                     │
	│ stop    │ stopped-upgrade-124684 stop                                                                                                              │ stopped-upgrade-124684    │ jenkins │ v1.32.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p stopped-upgrade-124684 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-124684    │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:40 UTC │
	│ delete  │ -p stopped-upgrade-124684                                                                                                                │ stopped-upgrade-124684    │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ start   │ -p running-upgrade-700635 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-700635    │ jenkins │ v1.32.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ start   │ -p running-upgrade-700635 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-700635    │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p running-upgrade-700635                                                                                                                │ running-upgrade-700635    │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p pause-524446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-524446              │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p pause-524446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-524446              │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p pause-524446 --alsologtostderr -v=5                                                                                                   │ pause-524446              │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:42:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:42:36.353265  453808 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:36.353472  453808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:36.353499  453808 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:36.353518  453808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:36.353861  453808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:42:36.354281  453808 out.go:368] Setting JSON to false
	I1101 10:42:36.358491  453808 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8708,"bootTime":1761985048,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:42:36.358613  453808 start.go:143] virtualization:  
	I1101 10:42:36.363164  453808 out.go:179] * [pause-524446] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:42:36.366661  453808 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:42:36.366717  453808 notify.go:221] Checking for updates...
	I1101 10:42:36.370470  453808 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:42:36.374144  453808 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:42:36.377298  453808 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:42:36.380597  453808 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:42:36.384555  453808 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:42:36.388373  453808 config.go:182] Loaded profile config "pause-524446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:36.389117  453808 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:42:36.441417  453808 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:42:36.441548  453808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:36.531226  453808 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:42:36.520250994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:42:36.531344  453808 docker.go:319] overlay module found
	I1101 10:42:36.535045  453808 out.go:179] * Using the docker driver based on existing profile
	I1101 10:42:36.538088  453808 start.go:309] selected driver: docker
	I1101 10:42:36.538112  453808 start.go:930] validating driver "docker" against &{Name:pause-524446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:36.538241  453808 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:42:36.538360  453808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:36.645234  453808 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:42:36.634613688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:42:36.645639  453808 cni.go:84] Creating CNI manager for ""
	I1101 10:42:36.645709  453808 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:36.645770  453808 start.go:353] cluster config:
	{Name:pause-524446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:36.650680  453808 out.go:179] * Starting "pause-524446" primary control-plane node in "pause-524446" cluster
	I1101 10:42:36.653533  453808 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:42:36.656455  453808 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:42:36.659339  453808 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:36.659407  453808 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:42:36.659421  453808 cache.go:59] Caching tarball of preloaded images
	I1101 10:42:36.659526  453808 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:42:36.659543  453808 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:42:36.659685  453808 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/config.json ...
	I1101 10:42:36.659932  453808 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:42:36.689096  453808 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:42:36.689119  453808 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:42:36.689133  453808 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:42:36.689156  453808 start.go:360] acquireMachinesLock for pause-524446: {Name:mk848fc020171d62027c0592a514cb787e1e6375 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:42:36.689211  453808 start.go:364] duration metric: took 38.236µs to acquireMachinesLock for "pause-524446"
	I1101 10:42:36.689231  453808 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:42:36.689237  453808 fix.go:54] fixHost starting: 
	I1101 10:42:36.689495  453808 cli_runner.go:164] Run: docker container inspect pause-524446 --format={{.State.Status}}
	I1101 10:42:36.714755  453808 fix.go:112] recreateIfNeeded on pause-524446: state=Running err=<nil>
	W1101 10:42:36.714789  453808 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:42:34.304108  439729 logs.go:123] Gathering logs for container status ...
	I1101 10:42:34.304147  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:42:36.836775  439729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:42:36.717946  453808 out.go:252] * Updating the running docker "pause-524446" container ...
	I1101 10:42:36.717986  453808 machine.go:94] provisionDockerMachine start ...
	I1101 10:42:36.718092  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:36.744690  453808 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:36.745151  453808 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1101 10:42:36.745166  453808 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:42:36.913484  453808 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-524446
	
	I1101 10:42:36.913561  453808 ubuntu.go:182] provisioning hostname "pause-524446"
	I1101 10:42:36.913689  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:36.940119  453808 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:36.940435  453808 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1101 10:42:36.940446  453808 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-524446 && echo "pause-524446" | sudo tee /etc/hostname
	I1101 10:42:37.120640  453808 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-524446
	
	I1101 10:42:37.120864  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:37.167370  453808 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:37.167752  453808 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1101 10:42:37.167779  453808 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-524446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-524446/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-524446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:42:37.353274  453808 main.go:143] libmachine: SSH cmd err, output: <nil>: 
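	A minimal Go sketch (not part of the captured output) of the idempotent /etc/hosts update the provisioner just ran over the forwarded SSH port 127.0.0.1:33393, using golang.org/x/crypto/ssh; the key path, user, and port are taken from the log above, while the error handling and host-key policy are placeholder choices:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port as reported in the log above; adjust for your environment.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33393", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		// Same idempotent /etc/hosts update as in the log: rewrite the
		// 127.0.1.1 entry if one exists, otherwise append one.
		cmd := `if ! grep -xq '.*\spause-524446' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-524446/g' /etc/hosts
	  else
	    echo '127.0.1.1 pause-524446' | sudo tee -a /etc/hosts
	  fi
	fi`
		out, err := sess.CombinedOutput(cmd)
		fmt.Printf("output: %s err: %v\n", out, err)
	}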
	I1101 10:42:37.353352  453808 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:42:37.353394  453808 ubuntu.go:190] setting up certificates
	I1101 10:42:37.353446  453808 provision.go:84] configureAuth start
	I1101 10:42:37.353588  453808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-524446
	I1101 10:42:37.378839  453808 provision.go:143] copyHostCerts
	I1101 10:42:37.378919  453808 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:42:37.378943  453808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:42:37.379068  453808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:42:37.379188  453808 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:42:37.379201  453808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:42:37.379232  453808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:42:37.379300  453808 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:42:37.379310  453808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:42:37.379334  453808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:42:37.379404  453808 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.pause-524446 san=[127.0.0.1 192.168.85.2 localhost minikube pause-524446]
	I1101 10:42:37.491674  453808 provision.go:177] copyRemoteCerts
	I1101 10:42:37.491755  453808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:42:37.491802  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:37.509713  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:37.618247  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:42:37.649497  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:42:37.683432  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:42:37.708861  453808 provision.go:87] duration metric: took 355.381582ms to configureAuth
	I1101 10:42:37.708890  453808 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:42:37.709178  453808 config.go:182] Loaded profile config "pause-524446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:37.709339  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:37.729118  453808 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:37.729495  453808 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1101 10:42:37.729525  453808 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:42:41.837130  439729 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:42:41.837194  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:42:41.837264  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:42:41.871815  439729 cri.go:89] found id: "e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:42:41.871839  439729 cri.go:89] found id: "8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855"
	I1101 10:42:41.871845  439729 cri.go:89] found id: ""
	I1101 10:42:41.871852  439729 logs.go:282] 2 containers: [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5 8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855]
	I1101 10:42:41.871909  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:41.875948  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:41.879658  439729 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:42:41.879754  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:42:41.906810  439729 cri.go:89] found id: ""
	I1101 10:42:41.906837  439729 logs.go:282] 0 containers: []
	W1101 10:42:41.906847  439729 logs.go:284] No container was found matching "etcd"
	I1101 10:42:41.906854  439729 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:42:41.906922  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:42:41.936840  439729 cri.go:89] found id: ""
	I1101 10:42:41.936865  439729 logs.go:282] 0 containers: []
	W1101 10:42:41.936875  439729 logs.go:284] No container was found matching "coredns"
	I1101 10:42:41.936882  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:42:41.936976  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:42:41.964745  439729 cri.go:89] found id: "6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:42:41.964822  439729 cri.go:89] found id: ""
	I1101 10:42:41.964844  439729 logs.go:282] 1 containers: [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5]
	I1101 10:42:41.964959  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:41.968695  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:42:41.968763  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:42:41.994544  439729 cri.go:89] found id: ""
	I1101 10:42:41.994569  439729 logs.go:282] 0 containers: []
	W1101 10:42:41.994578  439729 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:42:41.994585  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:42:41.994651  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:42:42.035413  439729 cri.go:89] found id: "4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:42:42.035435  439729 cri.go:89] found id: ""
	I1101 10:42:42.035443  439729 logs.go:282] 1 containers: [4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200]
	I1101 10:42:42.035501  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:42.039650  439729 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:42:42.039785  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:42:42.067247  439729 cri.go:89] found id: ""
	I1101 10:42:42.067279  439729 logs.go:282] 0 containers: []
	W1101 10:42:42.067289  439729 logs.go:284] No container was found matching "kindnet"
	I1101 10:42:42.067298  439729 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:42:42.067378  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:42:42.102404  439729 cri.go:89] found id: ""
	I1101 10:42:42.102458  439729 logs.go:282] 0 containers: []
	W1101 10:42:42.102486  439729 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:42:42.102509  439729 logs.go:123] Gathering logs for kube-scheduler [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5] ...
	I1101 10:42:42.102546  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:42:42.179899  439729 logs.go:123] Gathering logs for kube-controller-manager [4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200] ...
	I1101 10:42:42.179960  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:42:42.216163  439729 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:42:42.216200  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:42:42.280023  439729 logs.go:123] Gathering logs for kubelet ...
	I1101 10:42:42.280059  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:42:42.405023  439729 logs.go:123] Gathering logs for dmesg ...
	I1101 10:42:42.405069  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:42:42.421759  439729 logs.go:123] Gathering logs for kube-apiserver [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5] ...
	I1101 10:42:42.421791  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:42:42.457919  439729 logs.go:123] Gathering logs for container status ...
	I1101 10:42:42.457952  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:42:42.488051  439729 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:42:42.488087  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
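	A minimal Go sketch (not part of the captured output) of the log-gathering pattern above, which shells out to crictl logs --tail 400 for each container ID found earlier; the crictl path and the IDs in main are placeholders:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs shells out to crictl the same way the log lines above do:
	// `crictl logs --tail 400 <id>` for each container ID of interest.
	func gatherLogs(crictlPath string, ids []string) map[string]string {
		out := make(map[string]string)
		for _, id := range ids {
			b, err := exec.Command("sudo", crictlPath, "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				out[id] = fmt.Sprintf("error: %v\n%s", err, b)
				continue
			}
			out[id] = string(b)
		}
		return out
	}

	func main() {
		// Hypothetical IDs; in the test they come from `crictl ps -a --quiet --name=...`.
		logs := gatherLogs("/usr/local/bin/crictl", []string{"e809eaa68234", "6d9640a8adb6"})
		for id, text := range logs {
			fmt.Printf("== %s ==\n%s\n", id, text)
		}
	}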
	I1101 10:42:43.085886  453808 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:42:43.085913  453808 machine.go:97] duration metric: took 6.367919016s to provisionDockerMachine
	I1101 10:42:43.085925  453808 start.go:293] postStartSetup for "pause-524446" (driver="docker")
	I1101 10:42:43.085936  453808 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:42:43.085997  453808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:42:43.086053  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:43.104847  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:43.212821  453808 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:42:43.216268  453808 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:42:43.216298  453808 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:42:43.216310  453808 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:42:43.216366  453808 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:42:43.216446  453808 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:42:43.216549  453808 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:42:43.224388  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:42:43.243262  453808 start.go:296] duration metric: took 157.320492ms for postStartSetup
	I1101 10:42:43.243370  453808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:42:43.243422  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:43.260989  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:43.362718  453808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:42:43.368427  453808 fix.go:56] duration metric: took 6.679183288s for fixHost
	I1101 10:42:43.368454  453808 start.go:83] releasing machines lock for "pause-524446", held for 6.679233142s
	I1101 10:42:43.368547  453808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-524446
	I1101 10:42:43.388507  453808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:42:43.388633  453808 ssh_runner.go:195] Run: cat /version.json
	I1101 10:42:43.388673  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:43.388702  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:43.414882  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:43.417033  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:43.524905  453808 ssh_runner.go:195] Run: systemctl --version
	I1101 10:42:43.616617  453808 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:42:43.656652  453808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:42:43.662084  453808 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:42:43.662160  453808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:42:43.670388  453808 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:42:43.670423  453808 start.go:496] detecting cgroup driver to use...
	I1101 10:42:43.670465  453808 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:42:43.670525  453808 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:42:43.686009  453808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:42:43.699448  453808 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:42:43.699534  453808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:42:43.715425  453808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:42:43.728904  453808 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:42:43.862122  453808 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:42:44.007389  453808 docker.go:234] disabling docker service ...
	I1101 10:42:44.007554  453808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:42:44.028019  453808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:42:44.041882  453808 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:42:44.180073  453808 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:42:44.308164  453808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:42:44.321753  453808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:42:44.336120  453808 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:42:44.336215  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.349632  453808 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:42:44.349741  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.360187  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.370082  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.379772  453808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:42:44.389135  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.399097  453808 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.407620  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.417292  453808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:42:44.425936  453808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:42:44.433616  453808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:44.573545  453808 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:42:44.754612  453808 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:42:44.754737  453808 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:42:44.758887  453808 start.go:564] Will wait 60s for crictl version
	I1101 10:42:44.758998  453808 ssh_runner.go:195] Run: which crictl
	I1101 10:42:44.762785  453808 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:42:44.788856  453808 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
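	A minimal Go sketch (not part of the captured output) of the "Will wait 60s for socket path" step above: poll for the CRI-O socket with a deadline before asking crictl for its version:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a socket path until it exists or the deadline
	// passes, mirroring the wait on /var/run/crio/crio.sock above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}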
	I1101 10:42:44.789060  453808 ssh_runner.go:195] Run: crio --version
	I1101 10:42:44.816727  453808 ssh_runner.go:195] Run: crio --version
	I1101 10:42:44.849604  453808 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:42:44.852490  453808 cli_runner.go:164] Run: docker network inspect pause-524446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:42:44.868850  453808 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:42:44.872865  453808 kubeadm.go:884] updating cluster {Name:pause-524446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:42:44.873022  453808 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:44.873084  453808 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:44.909444  453808 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:44.909470  453808 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:42:44.909530  453808 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:44.935756  453808 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:44.935783  453808 cache_images.go:86] Images are preloaded, skipping loading
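	A minimal Go sketch (not part of the captured output) of the preload check above, which runs crictl images --output json and inspects the result; the JSON field names here follow what crictl typically emits for the CRI ListImagesResponse and should be verified against the crictl version in use:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList models only the fields this sketch needs; names are assumed.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("unexpected output:", err)
			return
		}
		for _, img := range list.Images {
			fmt.Println(img.ID, img.RepoTags)
		}
	}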
	I1101 10:42:44.935791  453808 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:42:44.935900  453808 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-524446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:42:44.935982  453808 ssh_runner.go:195] Run: crio config
	I1101 10:42:45.006192  453808 cni.go:84] Creating CNI manager for ""
	I1101 10:42:45.006218  453808 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:45.006244  453808 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:42:45.006271  453808 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-524446 NodeName:pause-524446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:42:45.006436  453808 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-524446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:42:45.006519  453808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:42:45.066445  453808 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:42:45.066672  453808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:42:45.094562  453808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 10:42:45.114101  453808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:42:45.136724  453808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 10:42:45.179365  453808 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:42:45.192474  453808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:45.447179  453808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:45.465584  453808 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446 for IP: 192.168.85.2
	I1101 10:42:45.465665  453808 certs.go:195] generating shared ca certs ...
	I1101 10:42:45.465707  453808 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:45.465926  453808 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:42:45.466022  453808 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:42:45.466061  453808 certs.go:257] generating profile certs ...
	I1101 10:42:45.466192  453808 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.key
	I1101 10:42:45.466404  453808 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/apiserver.key.bc582bad
	I1101 10:42:45.466569  453808 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/proxy-client.key
	I1101 10:42:45.466756  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:42:45.466837  453808 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:42:45.466884  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:42:45.466987  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:42:45.467081  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:42:45.467154  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:42:45.467244  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:42:45.468080  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:42:45.490958  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:42:45.517030  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:42:45.537271  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:42:45.561651  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:42:45.583938  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:42:45.608427  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:42:45.647863  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:42:45.681734  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:42:45.727913  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:42:45.779259  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:42:45.814759  453808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:42:45.839815  453808 ssh_runner.go:195] Run: openssl version
	I1101 10:42:45.853516  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:42:45.873474  453808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:42:45.884601  453808 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:42:45.884666  453808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:42:45.963931  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:42:45.979345  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:42:45.991137  453808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:45.995213  453808 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:45.995295  453808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:46.044414  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:42:46.053907  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:42:46.067126  453808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:42:46.071533  453808 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:42:46.071601  453808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:42:46.116830  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:42:46.127733  453808 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:42:46.137295  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:42:46.191513  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:42:46.239613  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:42:46.293795  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:42:46.340565  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:42:46.398981  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
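	The six openssl -checkend 86400 runs above assert that each certificate is still valid 24 hours from now. A minimal Go sketch (not part of the captured output) of the same check using crypto/x509 instead of shelling out:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path is still valid at
	// now+window, which is what `openssl x509 -checkend 86400` asserts for a
	// 24-hour window.
	func validFor(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log above; any of the checked certs works the same way.
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}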
	I1101 10:42:46.446178  453808 kubeadm.go:401] StartCluster: {Name:pause-524446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:46.446318  453808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:42:46.446388  453808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:42:46.489853  453808 cri.go:89] found id: "562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804"
	I1101 10:42:46.489878  453808 cri.go:89] found id: "0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0"
	I1101 10:42:46.489885  453808 cri.go:89] found id: "4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace"
	I1101 10:42:46.489889  453808 cri.go:89] found id: "c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf"
	I1101 10:42:46.489892  453808 cri.go:89] found id: "c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0"
	I1101 10:42:46.489898  453808 cri.go:89] found id: "feb86306de65e800bdcacea118cfeb11cf011bdbd9410d36359d2de63e40e91f"
	I1101 10:42:46.489901  453808 cri.go:89] found id: "e6846cf4faaf8570defebff2c13c97d96e24a6c68f780e2878dfc1550e88dd21"
	I1101 10:42:46.489904  453808 cri.go:89] found id: "4e118b3a4f353c5d20da38b1b32e1892b43e71a1fc32c7794559b9e357567505"
	I1101 10:42:46.489917  453808 cri.go:89] found id: "7af788e2c649b3573c775b3824d9e334bcc1638c0fff42cb56de79e9832c2866"
	I1101 10:42:46.489924  453808 cri.go:89] found id: "d4694a41e4759b5ed3c113f391ee45c1533da5781f43154eb18a5c37c530d6f4"
	I1101 10:42:46.489932  453808 cri.go:89] found id: "d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56"
	I1101 10:42:46.489935  453808 cri.go:89] found id: "81746508ca1cda9731c90bead6a9925450ea0a9dbc2627c6c3ccbb245e90b516"
	I1101 10:42:46.489938  453808 cri.go:89] found id: "33b82756faa61b08a4a452bdc72a95129c5eae8d424452ca97c3b65a03880595"
	I1101 10:42:46.489942  453808 cri.go:89] found id: "f367c628d682bb258e0b9abe783adb0a9d4c25ac0de1c1c324d2da0d34b69daa"
	I1101 10:42:46.489945  453808 cri.go:89] found id: ""
	I1101 10:42:46.489997  453808 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:42:46.504698  453808 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:46Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:46.504770  453808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:42:46.516461  453808 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:42:46.516481  453808 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:42:46.516530  453808 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:42:46.527063  453808 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:42:46.527674  453808 kubeconfig.go:125] found "pause-524446" server: "https://192.168.85.2:8443"
	I1101 10:42:46.528448  453808 kapi.go:59] client config for pause-524446: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.key", CAFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:42:46.529014  453808 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:42:46.529036  453808 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:42:46.529041  453808 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:42:46.529046  453808 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:42:46.529051  453808 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:42:46.529304  453808 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:42:46.542184  453808 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:42:46.542218  453808 kubeadm.go:602] duration metric: took 25.731219ms to restartPrimaryControlPlane
	I1101 10:42:46.542227  453808 kubeadm.go:403] duration metric: took 96.059024ms to StartCluster
	I1101 10:42:46.542252  453808 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:46.542315  453808 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:42:46.543208  453808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:46.543435  453808 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:42:46.543783  453808 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:42:46.544001  453808 config.go:182] Loaded profile config "pause-524446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:46.547017  453808 out.go:179] * Enabled addons: 
	I1101 10:42:46.547084  453808 out.go:179] * Verifying Kubernetes components...
	I1101 10:42:46.549884  453808 addons.go:515] duration metric: took 6.074381ms for enable addons: enabled=[]
	I1101 10:42:46.549973  453808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:46.763553  453808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:46.785417  453808 node_ready.go:35] waiting up to 6m0s for node "pause-524446" to be "Ready" ...
	I1101 10:42:49.748839  453808 node_ready.go:49] node "pause-524446" is "Ready"
	I1101 10:42:49.748870  453808 node_ready.go:38] duration metric: took 2.963419333s for node "pause-524446" to be "Ready" ...
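	A minimal Go sketch (not part of the captured output) of the node-readiness wait above, polling the node's Ready condition with client-go; the kubeconfig path and node name are taken from the log, and the poll interval is arbitrary:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition, which is what the
	// "waiting up to 6m0s for node ... to be Ready" step above does.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %s not Ready within %s", name, timeout)
	}

	func main() {
		// Kubeconfig path from the log; node name matches the profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21832-292445/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitNodeReady(cs, "pause-524446", 6*time.Minute))
	}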
	I1101 10:42:49.748886  453808 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:42:49.748981  453808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:42:49.767617  453808 api_server.go:72] duration metric: took 3.224145824s to wait for apiserver process to appear ...
	I1101 10:42:49.767643  453808 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:42:49.767662  453808 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:42:49.789373  453808 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:49.789413  453808 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:50.268658  453808 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:42:50.278732  453808 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:50.278760  453808 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:50.768423  453808 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:42:50.783511  453808 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:50.783573  453808 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:51.267814  453808 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:42:51.276248  453808 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:42:51.277398  453808 api_server.go:141] control plane version: v1.34.1
	I1101 10:42:51.277422  453808 api_server.go:131] duration metric: took 1.509772638s to wait for apiserver health ...
	I1101 10:42:51.277431  453808 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:42:51.282329  453808 system_pods.go:59] 7 kube-system pods found
	I1101 10:42:51.282367  453808 system_pods.go:61] "coredns-66bc5c9577-shkrg" [3264e176-01c9-438e-8c67-40c0ffb8dde7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:51.282378  453808 system_pods.go:61] "etcd-pause-524446" [86b7fc41-8245-4b70-8392-6837d32041a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:51.282384  453808 system_pods.go:61] "kindnet-vfk7j" [fe8582b1-a504-4627-9ed1-7a06468425b9] Running
	I1101 10:42:51.282391  453808 system_pods.go:61] "kube-apiserver-pause-524446" [fb9a8cb5-2b67-4e3f-8aea-355c121060d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:51.282398  453808 system_pods.go:61] "kube-controller-manager-pause-524446" [1251842d-266f-4ff7-bbea-84af20d1594f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:51.282404  453808 system_pods.go:61] "kube-proxy-pjzqn" [379fefcf-57b3-4e29-bfea-91ec14ed93b0] Running
	I1101 10:42:51.282411  453808 system_pods.go:61] "kube-scheduler-pause-524446" [e39fb633-0bd1-4a62-98e9-a649d7309282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:51.282416  453808 system_pods.go:74] duration metric: took 4.979306ms to wait for pod list to return data ...
	I1101 10:42:51.282435  453808 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:42:51.285017  453808 default_sa.go:45] found service account: "default"
	I1101 10:42:51.285039  453808 default_sa.go:55] duration metric: took 2.597416ms for default service account to be created ...
	I1101 10:42:51.285049  453808 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:42:51.288541  453808 system_pods.go:86] 7 kube-system pods found
	I1101 10:42:51.288622  453808 system_pods.go:89] "coredns-66bc5c9577-shkrg" [3264e176-01c9-438e-8c67-40c0ffb8dde7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:51.288658  453808 system_pods.go:89] "etcd-pause-524446" [86b7fc41-8245-4b70-8392-6837d32041a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:51.288699  453808 system_pods.go:89] "kindnet-vfk7j" [fe8582b1-a504-4627-9ed1-7a06468425b9] Running
	I1101 10:42:51.288728  453808 system_pods.go:89] "kube-apiserver-pause-524446" [fb9a8cb5-2b67-4e3f-8aea-355c121060d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:51.288771  453808 system_pods.go:89] "kube-controller-manager-pause-524446" [1251842d-266f-4ff7-bbea-84af20d1594f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:51.288795  453808 system_pods.go:89] "kube-proxy-pjzqn" [379fefcf-57b3-4e29-bfea-91ec14ed93b0] Running
	I1101 10:42:51.288816  453808 system_pods.go:89] "kube-scheduler-pause-524446" [e39fb633-0bd1-4a62-98e9-a649d7309282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:51.288858  453808 system_pods.go:126] duration metric: took 3.802047ms to wait for k8s-apps to be running ...
	I1101 10:42:51.288884  453808 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:42:51.288992  453808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:51.304017  453808 system_svc.go:56] duration metric: took 15.123535ms WaitForService to wait for kubelet
	I1101 10:42:51.304096  453808 kubeadm.go:587] duration metric: took 4.76062757s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:51.304131  453808 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:42:51.307003  453808 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:42:51.307034  453808 node_conditions.go:123] node cpu capacity is 2
	I1101 10:42:51.307048  453808 node_conditions.go:105] duration metric: took 2.895897ms to run NodePressure ...
	I1101 10:42:51.307060  453808 start.go:242] waiting for startup goroutines ...
	I1101 10:42:51.307068  453808 start.go:247] waiting for cluster config update ...
	I1101 10:42:51.307076  453808 start.go:256] writing updated cluster config ...
	I1101 10:42:51.307384  453808 ssh_runner.go:195] Run: rm -f paused
	I1101 10:42:51.310918  453808 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:51.311534  453808 kapi.go:59] client config for pause-524446: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.key", CAFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:42:51.314620  453808 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-shkrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:52.569217  439729 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.081107764s)
	W1101 10:42:52.569257  439729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 10:42:52.569266  439729 logs.go:123] Gathering logs for kube-apiserver [8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855] ...
	I1101 10:42:52.569276  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855"
	W1101 10:42:53.320527  453808 pod_ready.go:104] pod "coredns-66bc5c9577-shkrg" is not "Ready", error: <nil>
	W1101 10:42:55.320988  453808 pod_ready.go:104] pod "coredns-66bc5c9577-shkrg" is not "Ready", error: <nil>
	I1101 10:42:55.106949  439729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:42:58.162433  439729 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:47938->192.168.76.2:8443: read: connection reset by peer
	I1101 10:42:58.162489  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:42:58.162553  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:42:58.191706  439729 cri.go:89] found id: "e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:42:58.191746  439729 cri.go:89] found id: "8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855"
	I1101 10:42:58.191751  439729 cri.go:89] found id: ""
	I1101 10:42:58.191759  439729 logs.go:282] 2 containers: [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5 8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855]
	I1101 10:42:58.191819  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.196456  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.200188  439729 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:42:58.200264  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:42:58.229948  439729 cri.go:89] found id: ""
	I1101 10:42:58.229971  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.229979  439729 logs.go:284] No container was found matching "etcd"
	I1101 10:42:58.229986  439729 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:42:58.230055  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:42:58.256547  439729 cri.go:89] found id: ""
	I1101 10:42:58.256576  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.256585  439729 logs.go:284] No container was found matching "coredns"
	I1101 10:42:58.256592  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:42:58.256650  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:42:58.283951  439729 cri.go:89] found id: "6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:42:58.283976  439729 cri.go:89] found id: ""
	I1101 10:42:58.283988  439729 logs.go:282] 1 containers: [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5]
	I1101 10:42:58.284051  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.288560  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:42:58.288628  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:42:58.315996  439729 cri.go:89] found id: ""
	I1101 10:42:58.316018  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.316026  439729 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:42:58.316033  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:42:58.316089  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:42:58.343419  439729 cri.go:89] found id: "6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7"
	I1101 10:42:58.343439  439729 cri.go:89] found id: "4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:42:58.343444  439729 cri.go:89] found id: ""
	I1101 10:42:58.343451  439729 logs.go:282] 2 containers: [6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200]
	I1101 10:42:58.343507  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.347493  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.351739  439729 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:42:58.351851  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:42:58.379386  439729 cri.go:89] found id: ""
	I1101 10:42:58.379408  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.379417  439729 logs.go:284] No container was found matching "kindnet"
	I1101 10:42:58.379424  439729 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:42:58.379493  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:42:58.409607  439729 cri.go:89] found id: ""
	I1101 10:42:58.409680  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.409695  439729 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:42:58.409710  439729 logs.go:123] Gathering logs for kube-apiserver [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5] ...
	I1101 10:42:58.409727  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:42:58.448466  439729 logs.go:123] Gathering logs for kube-apiserver [8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855] ...
	I1101 10:42:58.448512  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855"
	I1101 10:42:58.491073  439729 logs.go:123] Gathering logs for kube-controller-manager [6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7] ...
	I1101 10:42:58.491104  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7"
	I1101 10:42:58.519590  439729 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:42:58.519618  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:42:58.583449  439729 logs.go:123] Gathering logs for kubelet ...
	I1101 10:42:58.583484  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:42:58.705867  439729 logs.go:123] Gathering logs for kube-scheduler [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5] ...
	I1101 10:42:58.705902  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:42:58.772126  439729 logs.go:123] Gathering logs for kube-controller-manager [4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200] ...
	I1101 10:42:58.772162  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:42:58.810184  439729 logs.go:123] Gathering logs for container status ...
	I1101 10:42:58.810214  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:42:58.859273  439729 logs.go:123] Gathering logs for dmesg ...
	I1101 10:42:58.859303  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:42:58.877184  439729 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:42:58.877212  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:42:58.951440  439729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:42:57.819992  453808 pod_ready.go:94] pod "coredns-66bc5c9577-shkrg" is "Ready"
	I1101 10:42:57.820017  453808 pod_ready.go:86] duration metric: took 6.505371702s for pod "coredns-66bc5c9577-shkrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:57.822879  453808 pod_ready.go:83] waiting for pod "etcd-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.328877  453808 pod_ready.go:94] pod "etcd-pause-524446" is "Ready"
	I1101 10:42:59.328908  453808 pod_ready.go:86] duration metric: took 1.50600069s for pod "etcd-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.331487  453808 pod_ready.go:83] waiting for pod "kube-apiserver-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.336084  453808 pod_ready.go:94] pod "kube-apiserver-pause-524446" is "Ready"
	I1101 10:42:59.336113  453808 pod_ready.go:86] duration metric: took 4.604483ms for pod "kube-apiserver-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.338597  453808 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.344780  453808 pod_ready.go:94] pod "kube-controller-manager-pause-524446" is "Ready"
	I1101 10:42:59.344809  453808 pod_ready.go:86] duration metric: took 6.18278ms for pod "kube-controller-manager-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.417615  453808 pod_ready.go:83] waiting for pod "kube-proxy-pjzqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.818366  453808 pod_ready.go:94] pod "kube-proxy-pjzqn" is "Ready"
	I1101 10:42:59.818396  453808 pod_ready.go:86] duration metric: took 400.755815ms for pod "kube-proxy-pjzqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:00.111532  453808 pod_ready.go:83] waiting for pod "kube-scheduler-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:43:02.118033  453808 pod_ready.go:104] pod "kube-scheduler-pause-524446" is not "Ready", error: <nil>
	I1101 10:43:03.117690  453808 pod_ready.go:94] pod "kube-scheduler-pause-524446" is "Ready"
	I1101 10:43:03.117720  453808 pod_ready.go:86] duration metric: took 3.006161913s for pod "kube-scheduler-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:03.117734  453808 pod_ready.go:40] duration metric: took 11.806785245s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:43:03.174002  453808 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:43:03.177231  453808 out.go:179] * Done! kubectl is now configured to use "pause-524446" cluster and "default" namespace by default
	I1101 10:43:01.453111  439729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:43:01.453528  439729 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:43:01.453583  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:43:01.453643  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:43:01.482749  439729 cri.go:89] found id: "e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:43:01.482773  439729 cri.go:89] found id: ""
	I1101 10:43:01.482781  439729 logs.go:282] 1 containers: [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5]
	I1101 10:43:01.482839  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:43:01.486466  439729 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:43:01.486541  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:43:01.514687  439729 cri.go:89] found id: ""
	I1101 10:43:01.514710  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.514718  439729 logs.go:284] No container was found matching "etcd"
	I1101 10:43:01.514730  439729 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:43:01.514792  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:43:01.542325  439729 cri.go:89] found id: ""
	I1101 10:43:01.542348  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.542357  439729 logs.go:284] No container was found matching "coredns"
	I1101 10:43:01.542364  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:43:01.542420  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:43:01.571857  439729 cri.go:89] found id: "6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:43:01.571876  439729 cri.go:89] found id: ""
	I1101 10:43:01.571885  439729 logs.go:282] 1 containers: [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5]
	I1101 10:43:01.571944  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:43:01.576254  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:43:01.576322  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:43:01.602960  439729 cri.go:89] found id: ""
	I1101 10:43:01.602983  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.602991  439729 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:43:01.602998  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:43:01.603060  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:43:01.630085  439729 cri.go:89] found id: "6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7"
	I1101 10:43:01.630106  439729 cri.go:89] found id: "4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:43:01.630111  439729 cri.go:89] found id: ""
	I1101 10:43:01.630119  439729 logs.go:282] 2 containers: [6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200]
	I1101 10:43:01.630178  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:43:01.634285  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:43:01.637953  439729 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:43:01.638029  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:43:01.668696  439729 cri.go:89] found id: ""
	I1101 10:43:01.668723  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.668732  439729 logs.go:284] No container was found matching "kindnet"
	I1101 10:43:01.668738  439729 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:43:01.668799  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:43:01.698862  439729 cri.go:89] found id: ""
	I1101 10:43:01.698937  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.698964  439729 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:43:01.699003  439729 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:43:01.699038  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:43:01.773211  439729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:43:01.773233  439729 logs.go:123] Gathering logs for kube-scheduler [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5] ...
	I1101 10:43:01.773260  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:43:01.848952  439729 logs.go:123] Gathering logs for kube-controller-manager [4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200] ...
	I1101 10:43:01.848989  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:43:01.877379  439729 logs.go:123] Gathering logs for container status ...
	I1101 10:43:01.877408  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:43:01.921320  439729 logs.go:123] Gathering logs for kubelet ...
	I1101 10:43:01.921349  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:43:02.037140  439729 logs.go:123] Gathering logs for dmesg ...
	I1101 10:43:02.037181  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:43:02.054505  439729 logs.go:123] Gathering logs for kube-apiserver [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5] ...
	I1101 10:43:02.054534  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:43:02.088171  439729 logs.go:123] Gathering logs for kube-controller-manager [6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7] ...
	I1101 10:43:02.088212  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7"
	I1101 10:43:02.121664  439729 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:43:02.121692  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.860396147Z" level=info msg="Started container" PID=2299 containerID=c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf description=kube-system/coredns-66bc5c9577-shkrg/coredns id=c27bb7e3-8fff-45f6-b3a3-9022eaeb8750 name=/runtime.v1.RuntimeService/StartContainer sandboxID=231cb7785eb15972040f1e91279887a59065ad5c8a4c5a1a1e218492c3fba5ba
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.860869826Z" level=info msg="Started container" PID=2319 containerID=562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804 description=kube-system/kindnet-vfk7j/kindnet-cni id=97e83148-05cc-46e6-bf90-6e5a6845c5fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=f55dc37c46c79bf437546a3a3ae207530504aa379f853e440d94350fb13b6a2d
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.864038497Z" level=info msg="Starting container: c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0" id=8a89f242-06be-4dd1-aefb-9421e56cdf41 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.894549176Z" level=info msg="Started container" PID=2308 containerID=c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0 description=kube-system/kube-controller-manager-pause-524446/kube-controller-manager id=8a89f242-06be-4dd1-aefb-9421e56cdf41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f5a1c79d69de6150ae931168617730eddd360709fbc20b64da56080f311a3a12
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.906407712Z" level=info msg="Created container 0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0: kube-system/kube-apiserver-pause-524446/kube-apiserver" id=56be0c4e-f974-495f-8d4e-985d7b132470 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.90706243Z" level=info msg="Starting container: 0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0" id=d731d617-e284-404d-98f6-d0b791ec36e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.909472578Z" level=info msg="Started container" PID=2337 containerID=0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0 description=kube-system/kube-apiserver-pause-524446/kube-apiserver id=d731d617-e284-404d-98f6-d0b791ec36e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7418ca25007115fab06e870a34af984f6b9bf48f12be949df6557400d2fa8b5b
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.909668158Z" level=info msg="Created container 4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace: kube-system/kube-scheduler-pause-524446/kube-scheduler" id=135d88ee-97b8-4271-b8b8-293b6e947c54 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.912241631Z" level=info msg="Starting container: 4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace" id=3c4cdd4c-4f69-40d1-adf2-4becc632b639 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.922831157Z" level=info msg="Started container" PID=2325 containerID=4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace description=kube-system/kube-scheduler-pause-524446/kube-scheduler id=3c4cdd4c-4f69-40d1-adf2-4becc632b639 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd0de25ea9ff7b91147192325b49d1f627103c30816628917c436e1990b47ad5
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.229884503Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.233377197Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.233409846Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.233430597Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.24250106Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.242538631Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.242559276Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.261827729Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.26186791Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.261897055Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.273344714Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.273534993Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.273632963Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.278320869Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.278501753Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	562d7a480bd8b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   f55dc37c46c79       kindnet-vfk7j                          kube-system
	0355684c3c55d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   7418ca2500711       kube-apiserver-pause-524446            kube-system
	4b2174d9ada72       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   fd0de25ea9ff7       kube-scheduler-pause-524446            kube-system
	c7cc427c765d5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   231cb7785eb15       coredns-66bc5c9577-shkrg               kube-system
	c90f5c689ca41       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   f5a1c79d69de6       kube-controller-manager-pause-524446   kube-system
	feb86306de65e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   0b29e4977bde9       kube-proxy-pjzqn                       kube-system
	e6846cf4faaf8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   23ecc89391305       etcd-pause-524446                      kube-system
	4e118b3a4f353       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   231cb7785eb15       coredns-66bc5c9577-shkrg               kube-system
	7af788e2c649b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   0b29e4977bde9       kube-proxy-pjzqn                       kube-system
	d4694a41e4759       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   f55dc37c46c79       kindnet-vfk7j                          kube-system
	d01823d33f87d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   7418ca2500711       kube-apiserver-pause-524446            kube-system
	81746508ca1cd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   f5a1c79d69de6       kube-controller-manager-pause-524446   kube-system
	33b82756faa61       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   23ecc89391305       etcd-pause-524446                      kube-system
	f367c628d682b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   fd0de25ea9ff7       kube-scheduler-pause-524446            kube-system
	
	
	==> coredns [4e118b3a4f353c5d20da38b1b32e1892b43e71a1fc32c7794559b9e357567505] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60112 - 32743 "HINFO IN 1558694380626053744.1513679670407991713. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013401919s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48443 - 29734 "HINFO IN 5295854242861481314.2168278353610081996. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00462923s
	
	
	==> describe nodes <==
	Name:               pause-524446
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-524446
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=pause-524446
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-524446
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:43:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:33 +0000   Sat, 01 Nov 2025 10:41:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:33 +0000   Sat, 01 Nov 2025 10:41:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:33 +0000   Sat, 01 Nov 2025 10:41:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:33 +0000   Sat, 01 Nov 2025 10:42:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-524446
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d21f825f-2552-4d1f-a956-ef295a4b598a
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-shkrg                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-524446                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         80s
	  kube-system                 kindnet-vfk7j                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-pause-524446             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-524446    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-pjzqn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-524446             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 72s   kube-proxy       
	  Normal   Starting                 16s   kube-proxy       
	  Normal   Starting                 79s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s   kubelet          Node pause-524446 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s   kubelet          Node pause-524446 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s   kubelet          Node pause-524446 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s   node-controller  Node pause-524446 event: Registered Node pause-524446 in Controller
	  Normal   NodeReady                33s   kubelet          Node pause-524446 status is now: NodeReady
	  Normal   RegisteredNode           13s   node-controller  Node pause-524446 event: Registered Node pause-524446 in Controller
	
	
	==> dmesg <==
	[ +32.523814] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:16] overlayfs: idmapped layers are currently not supported
	[  +4.224848] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.523616] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[ +37.261841] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [33b82756faa61b08a4a452bdc72a95129c5eae8d424452ca97c3b65a03880595] <==
	{"level":"warn","ts":"2025-11-01T10:41:43.065475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.121737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.145851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.176067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.221398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.246381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.403576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:42:37.922076Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:42:37.922121Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-524446","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:42:37.922208Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:42:38.124089Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-11-01T10:42:38.124467Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:42:38.124528Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:42:38.124562Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-01T10:42:38.124248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:42:38.124276Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-11-01T10:42:38.124402Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-01T10:42:38.124707Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:42:38.124686Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:42:38.124780Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:42:38.124729Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T10:42:38.128387Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-01T10:42:38.128468Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:42:38.128502Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T10:42:38.128509Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-524446","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [e6846cf4faaf8570defebff2c13c97d96e24a6c68f780e2878dfc1550e88dd21] <==
	{"level":"warn","ts":"2025-11-01T10:42:48.470565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.491173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.511381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.530787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.555438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.566626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.582510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.597912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.615707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.643305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.661807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.679030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.704359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.717239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.741453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.759302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.773795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.801396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.813039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.824262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.847805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.867640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.883488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.907842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.983195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:43:06 up  2:25,  0 user,  load average: 1.66, 2.46, 2.20
	Linux pause-524446 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804] <==
	I1101 10:42:45.978023       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:42:46.025111       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:42:46.025280       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:42:46.025293       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:42:46.025307       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:42:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:42:46.226973       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:42:46.226990       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:42:46.226999       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:42:46.227306       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:42:49.927832       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:49.927944       1 metrics.go:72] Registering metrics
	I1101 10:42:49.928048       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:56.229412       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:56.229541       1 main.go:301] handling current node
	I1101 10:43:06.225650       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:43:06.225722       1 main.go:301] handling current node
	
	
	==> kindnet [d4694a41e4759b5ed3c113f391ee45c1533da5781f43154eb18a5c37c530d6f4] <==
	I1101 10:41:52.533892       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:41:52.544084       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:41:52.544372       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:41:52.544422       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:41:52.544485       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:41:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:41:52.746233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:41:52.746321       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:41:52.746354       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:41:52.750803       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:42:22.746392       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:42:22.747453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:42:22.747452       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:42:22.747615       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:42:23.946579       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:23.946611       1 metrics.go:72] Registering metrics
	I1101 10:42:23.946674       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:32.747374       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:32.747434       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0] <==
	I1101 10:42:49.790975       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:42:49.791322       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:42:49.791677       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:42:49.791752       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:42:49.792154       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:42:49.793151       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:42:49.794148       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:42:49.795159       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:42:49.795253       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:42:49.812556       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:42:49.793151       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:42:49.830425       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:42:49.839007       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:42:49.840241       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:42:49.840332       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:42:49.840365       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:42:49.840407       1 cache.go:39] Caches are synced for autoregister controller
	E1101 10:42:49.866148       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:42:49.875570       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:42:50.598516       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:42:51.753685       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:42:53.184767       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:42:53.399314       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:42:53.498493       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:42:53.548363       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56] <==
	W1101 10:42:37.977279       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978060       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978117       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976067       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978074       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978278       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976144       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976103       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976181       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976209       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976234       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976260       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976285       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976313       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976340       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978188       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978217       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978253       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978595       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978626       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978653       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978685       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978710       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978735       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978764       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [81746508ca1cda9731c90bead6a9925450ea0a9dbc2627c6c3ccbb245e90b516] <==
	I1101 10:41:51.356211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:41:51.361879       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:41:51.362298       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-524446" podCIDRs=["10.244.0.0/24"]
	I1101 10:41:51.365692       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:41:51.375343       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:41:51.377962       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:41:51.391063       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:41:51.393850       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:41:51.394216       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:41:51.394293       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:41:51.394656       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:41:51.394695       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:41:51.397722       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:41:51.397809       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:41:51.398602       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:41:51.398676       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:41:51.398698       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:41:51.399181       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:41:51.399214       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:41:51.399378       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:41:51.404074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:41:51.404098       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:41:51.404104       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:41:51.407892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:42:36.351223       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0] <==
	I1101 10:42:53.170287       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:42:53.173946       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:42:53.180797       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:42:53.180904       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:42:53.187325       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:42:53.191120       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:42:53.191267       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:42:53.191342       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:42:53.191789       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:42:53.191854       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:42:53.192147       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:42:53.192279       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:42:53.192318       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:42:53.192386       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:42:53.192445       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:42:53.192509       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-524446"
	I1101 10:42:53.192543       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:42:53.192581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:42:53.193318       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:42:53.199200       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:42:53.201395       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:42:53.204500       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:42:53.206740       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:42:53.209944       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:42:53.212205       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [7af788e2c649b3573c775b3824d9e334bcc1638c0fff42cb56de79e9832c2866] <==
	I1101 10:41:53.661998       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:41:53.770530       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:41:53.870694       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:41:53.870830       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:41:53.870935       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:41:53.904567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:41:53.904684       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:41:53.914553       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:41:53.914990       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:41:53.915222       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:53.916694       1 config.go:200] "Starting service config controller"
	I1101 10:41:53.916761       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:41:53.916805       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:41:53.916833       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:41:53.916874       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:41:53.916901       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:41:53.919075       1 config.go:309] "Starting node config controller"
	I1101 10:41:53.921029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:41:53.921097       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:41:54.017047       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:41:54.017120       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:41:54.017181       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [feb86306de65e800bdcacea118cfeb11cf011bdbd9410d36359d2de63e40e91f] <==
	I1101 10:42:46.829703       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:42:47.881453       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:42:49.907146       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:42:49.907186       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:42:49.907250       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:42:49.979032       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:42:49.979098       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:42:49.989424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:42:49.989832       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:42:49.990059       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:49.992152       1 config.go:200] "Starting service config controller"
	I1101 10:42:49.997249       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:42:49.993086       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:42:49.997352       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:42:49.993105       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:42:49.997363       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:42:49.994407       1 config.go:309] "Starting node config controller"
	I1101 10:42:49.997407       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:42:49.997413       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:42:50.098464       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:42:50.098709       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:42:50.098769       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace] <==
	I1101 10:42:48.224121       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:42:49.760688       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:42:49.760726       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:42:49.760736       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:42:49.760743       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:42:49.823526       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:42:49.823676       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:49.827847       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:49.828036       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:49.832126       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:42:49.832224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:42:49.929274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f367c628d682bb258e0b9abe783adb0a9d4c25ac0de1c1c324d2da0d34b69daa] <==
	E1101 10:41:45.325843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:41:45.326361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:41:45.326408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:41:45.326463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:41:45.326534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:41:45.326741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:41:45.326822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:41:45.327221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:41:45.327377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:41:45.327513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:41:45.327805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:41:45.327959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:41:45.328049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:41:45.328206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:41:45.328443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:41:45.328639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:41:45.328707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:41:46.226261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 10:41:49.288003       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:37.918418       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:42:37.918441       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:42:37.918456       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:42:37.918484       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:37.918636       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:42:37.918658       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.617872    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="eb9b51bd83e399bd22d655ec3a3be5f0" pod="kube-system/kube-scheduler-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.618092    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39db5a70ba68144a4abc3e4f370daaf4" pod="kube-system/etcd-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.618290    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22a7a1b2f4684b61d603d3318779309e" pod="kube-system/kube-apiserver-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.618690    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9492c2d6a0fdf3febbfe569a5337abdd" pod="kube-system/kube-controller-manager-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: I1101 10:42:45.620353    1320 scope.go:117] "RemoveContainer" containerID="d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.620867    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-shkrg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3264e176-01c9-438e-8c67-40c0ffb8dde7" pod="kube-system/coredns-66bc5c9577-shkrg"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.621124    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="eb9b51bd83e399bd22d655ec3a3be5f0" pod="kube-system/kube-scheduler-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.621345    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39db5a70ba68144a4abc3e4f370daaf4" pod="kube-system/etcd-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.621624    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22a7a1b2f4684b61d603d3318779309e" pod="kube-system/kube-apiserver-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.621842    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9492c2d6a0fdf3febbfe569a5337abdd" pod="kube-system/kube-controller-manager-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.622062    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjzqn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="379fefcf-57b3-4e29-bfea-91ec14ed93b0" pod="kube-system/kube-proxy-pjzqn"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.622294    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-vfk7j\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fe8582b1-a504-4627-9ed1-7a06468425b9" pod="kube-system/kindnet-vfk7j"
	Nov 01 10:42:47 pause-524446 kubelet[1320]: W1101 10:42:47.545241    1320 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.722837    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-524446\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="22a7a1b2f4684b61d603d3318779309e" pod="kube-system/kube-apiserver-pause-524446"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.723044    1320 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-524446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.723066    1320 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-524446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.724205    1320 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-524446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.732604    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-524446\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="9492c2d6a0fdf3febbfe569a5337abdd" pod="kube-system/kube-controller-manager-pause-524446"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.744318    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-pjzqn\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="379fefcf-57b3-4e29-bfea-91ec14ed93b0" pod="kube-system/kube-proxy-pjzqn"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.750074    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-vfk7j\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="fe8582b1-a504-4627-9ed1-7a06468425b9" pod="kube-system/kindnet-vfk7j"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.768735    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-shkrg\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="3264e176-01c9-438e-8c67-40c0ffb8dde7" pod="kube-system/coredns-66bc5c9577-shkrg"
	Nov 01 10:42:57 pause-524446 kubelet[1320]: W1101 10:42:57.566827    1320 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 01 10:43:03 pause-524446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:43:03 pause-524446 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:43:03 pause-524446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-524446 -n pause-524446
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-524446 -n pause-524446: exit status 2 (381.158534ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-524446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
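Both status probes in this post-mortem drive out/minikube-linux-arm64 status with a Go template over the status struct, which is why the raw output is just the single word "Running" even though the command exits 2 (the harness itself notes this may be ok). A sketch, not from the recorded run, that prints several fields of the same struct at once, assuming the usual Host/Kubelet/APIServer/Kubeconfig field names:

	out/minikube-linux-arm64 status -p pause-524446 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'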
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-524446
helpers_test.go:243: (dbg) docker inspect pause-524446:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb",
	        "Created": "2025-11-01T10:41:20.236439178Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:41:20.307967461Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb/hosts",
	        "LogPath": "/var/lib/docker/containers/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb/faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb-json.log",
	        "Name": "/pause-524446",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-524446:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-524446",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "faec4cf7352d247e3cdbae1e8797e3d2a22ff05f77b37a0edf3dc7425bfce4cb",
	                "LowerDir": "/var/lib/docker/overlay2/2e31acb5976be0087e3ede684f65ab6e050f6023357c27b17712a54e3e0726aa-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e31acb5976be0087e3ede684f65ab6e050f6023357c27b17712a54e3e0726aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e31acb5976be0087e3ede684f65ab6e050f6023357c27b17712a54e3e0726aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e31acb5976be0087e3ede684f65ab6e050f6023357c27b17712a54e3e0726aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-524446",
	                "Source": "/var/lib/docker/volumes/pause-524446/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-524446",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-524446",
	                "name.minikube.sigs.k8s.io": "pause-524446",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6574481d97f2bfa3d2cb8e4a7ece5dbbb1a9bce26973b3a5c4a41fca9b872f40",
	            "SandboxKey": "/var/run/docker/netns/6574481d97f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-524446": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:be:99:3c:b9:a5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34b1034fb6de1150d95cb2d577a32e1de985ff9a5dca5f188af786a956aadd65",
	                    "EndpointID": "a02f58532b604ad576b6b4ef75d97da009527e3b798eacce948cb8d50911bab6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-524446",
	                        "faec4cf7352d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
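The inspect document above is the full JSON; for spot checks, docker inspect also takes a --format Go template to pull out single fields, the same mechanism minikube's own cli_runner uses in the logs further down to resolve the SSH host port. A sketch against the container named above, not a command from the recorded run:

	# container state and the published API-server port, per the JSON above (illustrative only)
	docker inspect pause-524446 --format '{{.State.Status}} paused={{.State.Paused}}'
	docker inspect pause-524446 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'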
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-524446 -n pause-524446
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-524446 -n pause-524446: exit status 2 (337.768132ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-524446 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-524446 logs -n 25: (1.522525789s)
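The -n 25 flag limits each log source to its last 25 lines. When the whole history is needed, the same command can write everything to a file instead of stdout; a sketch using minikube's --file flag, not part of the recorded run:

	out/minikube-linux-arm64 -p pause-524446 logs --file=/tmp/pause-524446.log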
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-276658 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p missing-upgrade-941524 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-941524    │ jenkins │ v1.32.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p NoKubernetes-276658 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ delete  │ -p NoKubernetes-276658                                                                                                                   │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p NoKubernetes-276658 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p missing-upgrade-941524 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-941524    │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:39 UTC │
	│ ssh     │ -p NoKubernetes-276658 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p NoKubernetes-276658                                                                                                                   │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p NoKubernetes-276658 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ ssh     │ -p NoKubernetes-276658 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ delete  │ -p NoKubernetes-276658                                                                                                                   │ NoKubernetes-276658       │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:39 UTC │
	│ delete  │ -p missing-upgrade-941524                                                                                                                │ missing-upgrade-941524    │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p stopped-upgrade-124684 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-124684    │ jenkins │ v1.32.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ stop    │ -p kubernetes-upgrade-946953                                                                                                             │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │                     │
	│ stop    │ stopped-upgrade-124684 stop                                                                                                              │ stopped-upgrade-124684    │ jenkins │ v1.32.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p stopped-upgrade-124684 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-124684    │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:40 UTC │
	│ delete  │ -p stopped-upgrade-124684                                                                                                                │ stopped-upgrade-124684    │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ start   │ -p running-upgrade-700635 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-700635    │ jenkins │ v1.32.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ start   │ -p running-upgrade-700635 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-700635    │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p running-upgrade-700635                                                                                                                │ running-upgrade-700635    │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p pause-524446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-524446              │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p pause-524446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-524446              │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p pause-524446 --alsologtostderr -v=5                                                                                                   │ pause-524446              │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:42:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:42:36.353265  453808 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:36.353472  453808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:36.353499  453808 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:36.353518  453808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:36.353861  453808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:42:36.354281  453808 out.go:368] Setting JSON to false
	I1101 10:42:36.358491  453808 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8708,"bootTime":1761985048,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:42:36.358613  453808 start.go:143] virtualization:  
	I1101 10:42:36.363164  453808 out.go:179] * [pause-524446] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:42:36.366661  453808 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:42:36.366717  453808 notify.go:221] Checking for updates...
	I1101 10:42:36.370470  453808 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:42:36.374144  453808 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:42:36.377298  453808 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:42:36.380597  453808 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:42:36.384555  453808 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:42:36.388373  453808 config.go:182] Loaded profile config "pause-524446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:36.389117  453808 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:42:36.441417  453808 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:42:36.441548  453808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:36.531226  453808 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:42:36.520250994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:42:36.531344  453808 docker.go:319] overlay module found
	I1101 10:42:36.535045  453808 out.go:179] * Using the docker driver based on existing profile
	I1101 10:42:36.538088  453808 start.go:309] selected driver: docker
	I1101 10:42:36.538112  453808 start.go:930] validating driver "docker" against &{Name:pause-524446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:36.538241  453808 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:42:36.538360  453808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:36.645234  453808 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:42:36.634613688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:42:36.645639  453808 cni.go:84] Creating CNI manager for ""
	I1101 10:42:36.645709  453808 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:36.645770  453808 start.go:353] cluster config:
	{Name:pause-524446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:36.650680  453808 out.go:179] * Starting "pause-524446" primary control-plane node in "pause-524446" cluster
	I1101 10:42:36.653533  453808 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:42:36.656455  453808 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:42:36.659339  453808 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:36.659407  453808 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:42:36.659421  453808 cache.go:59] Caching tarball of preloaded images
	I1101 10:42:36.659526  453808 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:42:36.659543  453808 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:42:36.659685  453808 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/config.json ...
	I1101 10:42:36.659932  453808 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:42:36.689096  453808 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:42:36.689119  453808 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:42:36.689133  453808 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:42:36.689156  453808 start.go:360] acquireMachinesLock for pause-524446: {Name:mk848fc020171d62027c0592a514cb787e1e6375 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:42:36.689211  453808 start.go:364] duration metric: took 38.236µs to acquireMachinesLock for "pause-524446"
	I1101 10:42:36.689231  453808 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:42:36.689237  453808 fix.go:54] fixHost starting: 
	I1101 10:42:36.689495  453808 cli_runner.go:164] Run: docker container inspect pause-524446 --format={{.State.Status}}
	I1101 10:42:36.714755  453808 fix.go:112] recreateIfNeeded on pause-524446: state=Running err=<nil>
	W1101 10:42:36.714789  453808 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:42:34.304108  439729 logs.go:123] Gathering logs for container status ...
	I1101 10:42:34.304147  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:42:36.836775  439729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:42:36.717946  453808 out.go:252] * Updating the running docker "pause-524446" container ...
	I1101 10:42:36.717986  453808 machine.go:94] provisionDockerMachine start ...
	I1101 10:42:36.718092  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:36.744690  453808 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:36.745151  453808 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1101 10:42:36.745166  453808 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:42:36.913484  453808 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-524446
	
	I1101 10:42:36.913561  453808 ubuntu.go:182] provisioning hostname "pause-524446"
	I1101 10:42:36.913689  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:36.940119  453808 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:36.940435  453808 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1101 10:42:36.940446  453808 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-524446 && echo "pause-524446" | sudo tee /etc/hostname
	I1101 10:42:37.120640  453808 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-524446
	
	I1101 10:42:37.120864  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:37.167370  453808 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:37.167752  453808 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1101 10:42:37.167779  453808 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-524446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-524446/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-524446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:42:37.353274  453808 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:42:37.353352  453808 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:42:37.353394  453808 ubuntu.go:190] setting up certificates
	I1101 10:42:37.353446  453808 provision.go:84] configureAuth start
	I1101 10:42:37.353588  453808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-524446
	I1101 10:42:37.378839  453808 provision.go:143] copyHostCerts
	I1101 10:42:37.378919  453808 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:42:37.378943  453808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:42:37.379068  453808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:42:37.379188  453808 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:42:37.379201  453808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:42:37.379232  453808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:42:37.379300  453808 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:42:37.379310  453808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:42:37.379334  453808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:42:37.379404  453808 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.pause-524446 san=[127.0.0.1 192.168.85.2 localhost minikube pause-524446]
	I1101 10:42:37.491674  453808 provision.go:177] copyRemoteCerts
	I1101 10:42:37.491755  453808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:42:37.491802  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:37.509713  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:37.618247  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:42:37.649497  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:42:37.683432  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:42:37.708861  453808 provision.go:87] duration metric: took 355.381582ms to configureAuth
	I1101 10:42:37.708890  453808 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:42:37.709178  453808 config.go:182] Loaded profile config "pause-524446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:37.709339  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:37.729118  453808 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:37.729495  453808 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1101 10:42:37.729525  453808 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:42:41.837130  439729 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:42:41.837194  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:42:41.837264  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:42:41.871815  439729 cri.go:89] found id: "e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:42:41.871839  439729 cri.go:89] found id: "8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855"
	I1101 10:42:41.871845  439729 cri.go:89] found id: ""
	I1101 10:42:41.871852  439729 logs.go:282] 2 containers: [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5 8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855]
	I1101 10:42:41.871909  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:41.875948  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:41.879658  439729 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:42:41.879754  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:42:41.906810  439729 cri.go:89] found id: ""
	I1101 10:42:41.906837  439729 logs.go:282] 0 containers: []
	W1101 10:42:41.906847  439729 logs.go:284] No container was found matching "etcd"
	I1101 10:42:41.906854  439729 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:42:41.906922  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:42:41.936840  439729 cri.go:89] found id: ""
	I1101 10:42:41.936865  439729 logs.go:282] 0 containers: []
	W1101 10:42:41.936875  439729 logs.go:284] No container was found matching "coredns"
	I1101 10:42:41.936882  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:42:41.936976  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:42:41.964745  439729 cri.go:89] found id: "6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:42:41.964822  439729 cri.go:89] found id: ""
	I1101 10:42:41.964844  439729 logs.go:282] 1 containers: [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5]
	I1101 10:42:41.964959  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:41.968695  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:42:41.968763  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:42:41.994544  439729 cri.go:89] found id: ""
	I1101 10:42:41.994569  439729 logs.go:282] 0 containers: []
	W1101 10:42:41.994578  439729 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:42:41.994585  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:42:41.994651  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:42:42.035413  439729 cri.go:89] found id: "4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:42:42.035435  439729 cri.go:89] found id: ""
	I1101 10:42:42.035443  439729 logs.go:282] 1 containers: [4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200]
	I1101 10:42:42.035501  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:42.039650  439729 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:42:42.039785  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:42:42.067247  439729 cri.go:89] found id: ""
	I1101 10:42:42.067279  439729 logs.go:282] 0 containers: []
	W1101 10:42:42.067289  439729 logs.go:284] No container was found matching "kindnet"
	I1101 10:42:42.067298  439729 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:42:42.067378  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:42:42.102404  439729 cri.go:89] found id: ""
	I1101 10:42:42.102458  439729 logs.go:282] 0 containers: []
	W1101 10:42:42.102486  439729 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:42:42.102509  439729 logs.go:123] Gathering logs for kube-scheduler [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5] ...
	I1101 10:42:42.102546  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:42:42.179899  439729 logs.go:123] Gathering logs for kube-controller-manager [4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200] ...
	I1101 10:42:42.179960  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:42:42.216163  439729 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:42:42.216200  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:42:42.280023  439729 logs.go:123] Gathering logs for kubelet ...
	I1101 10:42:42.280059  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:42:42.405023  439729 logs.go:123] Gathering logs for dmesg ...
	I1101 10:42:42.405069  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:42:42.421759  439729 logs.go:123] Gathering logs for kube-apiserver [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5] ...
	I1101 10:42:42.421791  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:42:42.457919  439729 logs.go:123] Gathering logs for container status ...
	I1101 10:42:42.457952  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:42:42.488051  439729 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:42:42.488087  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1101 10:42:43.085886  453808 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:42:43.085913  453808 machine.go:97] duration metric: took 6.367919016s to provisionDockerMachine
	I1101 10:42:43.085925  453808 start.go:293] postStartSetup for "pause-524446" (driver="docker")
	I1101 10:42:43.085936  453808 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:42:43.085997  453808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:42:43.086053  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:43.104847  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:43.212821  453808 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:42:43.216268  453808 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:42:43.216298  453808 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:42:43.216310  453808 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:42:43.216366  453808 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:42:43.216446  453808 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:42:43.216549  453808 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:42:43.224388  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:42:43.243262  453808 start.go:296] duration metric: took 157.320492ms for postStartSetup
	I1101 10:42:43.243370  453808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:42:43.243422  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:43.260989  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:43.362718  453808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:42:43.368427  453808 fix.go:56] duration metric: took 6.679183288s for fixHost
	I1101 10:42:43.368454  453808 start.go:83] releasing machines lock for "pause-524446", held for 6.679233142s
	I1101 10:42:43.368547  453808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-524446
	I1101 10:42:43.388507  453808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:42:43.388633  453808 ssh_runner.go:195] Run: cat /version.json
	I1101 10:42:43.388673  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:43.388702  453808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-524446
	I1101 10:42:43.414882  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:43.417033  453808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/pause-524446/id_rsa Username:docker}
	I1101 10:42:43.524905  453808 ssh_runner.go:195] Run: systemctl --version
	I1101 10:42:43.616617  453808 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:42:43.656652  453808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:42:43.662084  453808 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:42:43.662160  453808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:42:43.670388  453808 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:42:43.670423  453808 start.go:496] detecting cgroup driver to use...
	I1101 10:42:43.670465  453808 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:42:43.670525  453808 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:42:43.686009  453808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:42:43.699448  453808 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:42:43.699534  453808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:42:43.715425  453808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:42:43.728904  453808 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:42:43.862122  453808 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:42:44.007389  453808 docker.go:234] disabling docker service ...
	I1101 10:42:44.007554  453808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:42:44.028019  453808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:42:44.041882  453808 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:42:44.180073  453808 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:42:44.308164  453808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
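The lines above stop, disable and mask the cri-docker and docker units before the node is switched over to CRI-O. A minimal Go sketch of that shutdown sequence (the helper, the use of os/exec and the tolerant error handling are illustrative assumptions, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // disableUnit stops, disables and masks a systemd unit, merely logging
    // failures the way the run above tolerates missing units.
    func disableUnit(unit string) {
    	for _, args := range [][]string{
    		{"systemctl", "stop", "-f", unit},
    		{"systemctl", "disable", unit},
    		{"systemctl", "mask", unit},
    	} {
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			fmt.Printf("%v: %v (%s)\n", args, err, out)
    		}
    	}
    }

    func main() {
    	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
    		disableUnit(u)
    	}
    }
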
	I1101 10:42:44.321753  453808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:42:44.336120  453808 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:42:44.336215  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.349632  453808 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:42:44.349741  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.360187  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.370082  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.379772  453808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:42:44.389135  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.399097  453808 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.407620  453808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:44.417292  453808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:42:44.425936  453808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:42:44.433616  453808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:44.573545  453808 ssh_runner.go:195] Run: sudo systemctl restart crio
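The sed runs above converge /etc/crictl.yaml and the 02-crio.conf drop-in on a fixed end state (pause image, cgroupfs cgroup manager, pod conmon cgroup, unprivileged port sysctl) before CRI-O is restarted. A sketch that simply renders that end state as text, purely for illustration of what the edits produce:

    package main

    import "fmt"

    // Illustrative only: the configuration the sed edits above converge to.
    const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

    const crioDropIn = `pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `

    func main() {
    	fmt.Print("/etc/crictl.yaml:\n" + crictlYAML)
    	fmt.Print("/etc/crio/crio.conf.d/02-crio.conf (relevant keys):\n" + crioDropIn)
    }
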
	I1101 10:42:44.754612  453808 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:42:44.754737  453808 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:42:44.758887  453808 start.go:564] Will wait 60s for crictl version
	I1101 10:42:44.758998  453808 ssh_runner.go:195] Run: which crictl
	I1101 10:42:44.762785  453808 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:42:44.788856  453808 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
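The `crictl version` output just printed is a set of "Key:  Value" lines. A standalone sketch of parsing it into fields (embedded sample taken from the log above; this is not the parser minikube uses):

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    func main() {
    	sample := `Version:  0.1.0
    RuntimeName:  cri-o
    RuntimeVersion:  1.34.1
    RuntimeApiVersion:  v1`

    	fields := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(sample))
    	for sc.Scan() {
    		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
    			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
    		}
    	}
    	fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"]) // cri-o 1.34.1
    }
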
	I1101 10:42:44.789060  453808 ssh_runner.go:195] Run: crio --version
	I1101 10:42:44.816727  453808 ssh_runner.go:195] Run: crio --version
	I1101 10:42:44.849604  453808 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:42:44.852490  453808 cli_runner.go:164] Run: docker network inspect pause-524446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:42:44.868850  453808 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:42:44.872865  453808 kubeadm.go:884] updating cluster {Name:pause-524446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:42:44.873022  453808 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:44.873084  453808 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:44.909444  453808 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:44.909470  453808 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:42:44.909530  453808 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:44.935756  453808 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:44.935783  453808 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:42:44.935791  453808 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:42:44.935900  453808 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-524446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:42:44.935982  453808 ssh_runner.go:195] Run: crio config
	I1101 10:42:45.006192  453808 cni.go:84] Creating CNI manager for ""
	I1101 10:42:45.006218  453808 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:45.006244  453808 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:42:45.006271  453808 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-524446 NodeName:pause-524446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:42:45.006436  453808 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-524446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:42:45.006519  453808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:42:45.066445  453808 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:42:45.066672  453808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:42:45.094562  453808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 10:42:45.114101  453808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:42:45.136724  453808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
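The rendered kubeadm/kubelet/kube-proxy configuration above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new and later compared against the existing file with `diff -u` (see further down). A stdlib-only sketch of a sanity check over such a multi-document YAML, splitting on `---` and listing the `kind:` of each document (illustrative, not part of minikube):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind: ") {
    				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
    			}
    		}
    	}
    }
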
	I1101 10:42:45.179365  453808 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:42:45.192474  453808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:45.447179  453808 ssh_runner.go:195] Run: sudo systemctl start kubelet
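The kubelet systemd drop-in shown earlier ([Service] with an ExecStart override carrying --hostname-override, --node-ip and the rest) is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf before systemd is reloaded and kubelet started, as above. A hedged sketch of writing such a drop-in (flag list abbreviated to what the log shows; must run as root; not the real generator):

    package main

    import (
    	"os"
    	"os/exec"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=pause-524446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
    `

    func main() {
    	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
    		panic(err)
    	}
    	exec.Command("systemctl", "daemon-reload").Run()
    	exec.Command("systemctl", "start", "kubelet").Run()
    }
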
	I1101 10:42:45.465584  453808 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446 for IP: 192.168.85.2
	I1101 10:42:45.465665  453808 certs.go:195] generating shared ca certs ...
	I1101 10:42:45.465707  453808 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:45.465926  453808 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:42:45.466022  453808 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:42:45.466061  453808 certs.go:257] generating profile certs ...
	I1101 10:42:45.466192  453808 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.key
	I1101 10:42:45.466404  453808 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/apiserver.key.bc582bad
	I1101 10:42:45.466569  453808 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/proxy-client.key
	I1101 10:42:45.466756  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:42:45.466837  453808 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:42:45.466884  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:42:45.466987  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:42:45.467081  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:42:45.467154  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:42:45.467244  453808 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:42:45.468080  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:42:45.490958  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:42:45.517030  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:42:45.537271  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:42:45.561651  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:42:45.583938  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:42:45.608427  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:42:45.647863  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:42:45.681734  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:42:45.727913  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:42:45.779259  453808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:42:45.814759  453808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:42:45.839815  453808 ssh_runner.go:195] Run: openssl version
	I1101 10:42:45.853516  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:42:45.873474  453808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:42:45.884601  453808 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:42:45.884666  453808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:42:45.963931  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:42:45.979345  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:42:45.991137  453808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:45.995213  453808 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:45.995295  453808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:46.044414  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:42:46.053907  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:42:46.067126  453808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:42:46.071533  453808 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:42:46.071601  453808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:42:46.116830  453808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
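Each CA bundle above is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash, which is what the paired `openssl x509 -hash -noout` and `ln -fs <hash>.0` runs do. A sketch of that pattern (helper name and paths are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCert links pemPath into /etc/ssl/certs under its OpenSSL subject
    // hash, mirroring the "openssl x509 -hash" + "ln -fs" pairs in the log.
    func installCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // replace an existing link, like ln -fs
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
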
	I1101 10:42:46.127733  453808 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:42:46.137295  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:42:46.191513  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:42:46.239613  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:42:46.293795  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:42:46.340565  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:42:46.398981  453808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
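The `openssl x509 ... -checkend 86400` runs above confirm that each control-plane certificate remains valid for at least another 24 hours. The equivalent check in pure Go, stdlib only (the path is just one of the files tested above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the first certificate in a PEM file is still
    // valid for at least d (the -checkend 86400 test corresponds to 24h).
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
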
	I1101 10:42:46.446178  453808 kubeadm.go:401] StartCluster: {Name:pause-524446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-524446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:46.446318  453808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:42:46.446388  453808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:42:46.489853  453808 cri.go:89] found id: "562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804"
	I1101 10:42:46.489878  453808 cri.go:89] found id: "0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0"
	I1101 10:42:46.489885  453808 cri.go:89] found id: "4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace"
	I1101 10:42:46.489889  453808 cri.go:89] found id: "c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf"
	I1101 10:42:46.489892  453808 cri.go:89] found id: "c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0"
	I1101 10:42:46.489898  453808 cri.go:89] found id: "feb86306de65e800bdcacea118cfeb11cf011bdbd9410d36359d2de63e40e91f"
	I1101 10:42:46.489901  453808 cri.go:89] found id: "e6846cf4faaf8570defebff2c13c97d96e24a6c68f780e2878dfc1550e88dd21"
	I1101 10:42:46.489904  453808 cri.go:89] found id: "4e118b3a4f353c5d20da38b1b32e1892b43e71a1fc32c7794559b9e357567505"
	I1101 10:42:46.489917  453808 cri.go:89] found id: "7af788e2c649b3573c775b3824d9e334bcc1638c0fff42cb56de79e9832c2866"
	I1101 10:42:46.489924  453808 cri.go:89] found id: "d4694a41e4759b5ed3c113f391ee45c1533da5781f43154eb18a5c37c530d6f4"
	I1101 10:42:46.489932  453808 cri.go:89] found id: "d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56"
	I1101 10:42:46.489935  453808 cri.go:89] found id: "81746508ca1cda9731c90bead6a9925450ea0a9dbc2627c6c3ccbb245e90b516"
	I1101 10:42:46.489938  453808 cri.go:89] found id: "33b82756faa61b08a4a452bdc72a95129c5eae8d424452ca97c3b65a03880595"
	I1101 10:42:46.489942  453808 cri.go:89] found id: "f367c628d682bb258e0b9abe783adb0a9d4c25ac0de1c1c324d2da0d34b69daa"
	I1101 10:42:46.489945  453808 cri.go:89] found id: ""
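The container IDs listed above come from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, one ID per line. A small exec-based sketch that runs the same query and collects the IDs (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
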
	I1101 10:42:46.489997  453808 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:42:46.504698  453808 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:46Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:46.504770  453808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:42:46.516461  453808 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:42:46.516481  453808 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:42:46.516530  453808 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:42:46.527063  453808 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:42:46.527674  453808 kubeconfig.go:125] found "pause-524446" server: "https://192.168.85.2:8443"
	I1101 10:42:46.528448  453808 kapi.go:59] client config for pause-524446: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.key", CAFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:42:46.529014  453808 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:42:46.529036  453808 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:42:46.529041  453808 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:42:46.529046  453808 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:42:46.529051  453808 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:42:46.529304  453808 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:42:46.542184  453808 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:42:46.542218  453808 kubeadm.go:602] duration metric: took 25.731219ms to restartPrimaryControlPlane
	I1101 10:42:46.542227  453808 kubeadm.go:403] duration metric: took 96.059024ms to StartCluster
	I1101 10:42:46.542252  453808 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:46.542315  453808 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:42:46.543208  453808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:46.543435  453808 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:42:46.543783  453808 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:42:46.544001  453808 config.go:182] Loaded profile config "pause-524446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:46.547017  453808 out.go:179] * Enabled addons: 
	I1101 10:42:46.547084  453808 out.go:179] * Verifying Kubernetes components...
	I1101 10:42:46.549884  453808 addons.go:515] duration metric: took 6.074381ms for enable addons: enabled=[]
	I1101 10:42:46.549973  453808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:46.763553  453808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:46.785417  453808 node_ready.go:35] waiting up to 6m0s for node "pause-524446" to be "Ready" ...
	I1101 10:42:49.748839  453808 node_ready.go:49] node "pause-524446" is "Ready"
	I1101 10:42:49.748870  453808 node_ready.go:38] duration metric: took 2.963419333s for node "pause-524446" to be "Ready" ...
	I1101 10:42:49.748886  453808 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:42:49.748981  453808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:42:49.767617  453808 api_server.go:72] duration metric: took 3.224145824s to wait for apiserver process to appear ...
	I1101 10:42:49.767643  453808 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:42:49.767662  453808 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:42:49.789373  453808 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:49.789413  453808 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:50.268658  453808 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:42:50.278732  453808 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:50.278760  453808 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:50.768423  453808 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:42:50.783511  453808 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:50.783573  453808 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:51.267814  453808 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:42:51.276248  453808 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:42:51.277398  453808 api_server.go:141] control plane version: v1.34.1
	I1101 10:42:51.277422  453808 api_server.go:131] duration metric: took 1.509772638s to wait for apiserver health ...
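The healthz loop above keeps requesting https://192.168.85.2:8443/healthz roughly every 500ms, treating the 500 responses with failed poststarthooks as retryable, until the body is plain "ok". A minimal polling sketch (the insecure TLS config is an assumption for brevity; the real client trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log retries at about this cadence
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", time.Minute))
    }
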
	I1101 10:42:51.277431  453808 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:42:51.282329  453808 system_pods.go:59] 7 kube-system pods found
	I1101 10:42:51.282367  453808 system_pods.go:61] "coredns-66bc5c9577-shkrg" [3264e176-01c9-438e-8c67-40c0ffb8dde7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:51.282378  453808 system_pods.go:61] "etcd-pause-524446" [86b7fc41-8245-4b70-8392-6837d32041a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:51.282384  453808 system_pods.go:61] "kindnet-vfk7j" [fe8582b1-a504-4627-9ed1-7a06468425b9] Running
	I1101 10:42:51.282391  453808 system_pods.go:61] "kube-apiserver-pause-524446" [fb9a8cb5-2b67-4e3f-8aea-355c121060d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:51.282398  453808 system_pods.go:61] "kube-controller-manager-pause-524446" [1251842d-266f-4ff7-bbea-84af20d1594f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:51.282404  453808 system_pods.go:61] "kube-proxy-pjzqn" [379fefcf-57b3-4e29-bfea-91ec14ed93b0] Running
	I1101 10:42:51.282411  453808 system_pods.go:61] "kube-scheduler-pause-524446" [e39fb633-0bd1-4a62-98e9-a649d7309282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:51.282416  453808 system_pods.go:74] duration metric: took 4.979306ms to wait for pod list to return data ...
	I1101 10:42:51.282435  453808 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:42:51.285017  453808 default_sa.go:45] found service account: "default"
	I1101 10:42:51.285039  453808 default_sa.go:55] duration metric: took 2.597416ms for default service account to be created ...
	I1101 10:42:51.285049  453808 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:42:51.288541  453808 system_pods.go:86] 7 kube-system pods found
	I1101 10:42:51.288622  453808 system_pods.go:89] "coredns-66bc5c9577-shkrg" [3264e176-01c9-438e-8c67-40c0ffb8dde7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:51.288658  453808 system_pods.go:89] "etcd-pause-524446" [86b7fc41-8245-4b70-8392-6837d32041a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:51.288699  453808 system_pods.go:89] "kindnet-vfk7j" [fe8582b1-a504-4627-9ed1-7a06468425b9] Running
	I1101 10:42:51.288728  453808 system_pods.go:89] "kube-apiserver-pause-524446" [fb9a8cb5-2b67-4e3f-8aea-355c121060d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:51.288771  453808 system_pods.go:89] "kube-controller-manager-pause-524446" [1251842d-266f-4ff7-bbea-84af20d1594f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:51.288795  453808 system_pods.go:89] "kube-proxy-pjzqn" [379fefcf-57b3-4e29-bfea-91ec14ed93b0] Running
	I1101 10:42:51.288816  453808 system_pods.go:89] "kube-scheduler-pause-524446" [e39fb633-0bd1-4a62-98e9-a649d7309282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:51.288858  453808 system_pods.go:126] duration metric: took 3.802047ms to wait for k8s-apps to be running ...
	I1101 10:42:51.288884  453808 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:42:51.288992  453808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:51.304017  453808 system_svc.go:56] duration metric: took 15.123535ms WaitForService to wait for kubelet
	I1101 10:42:51.304096  453808 kubeadm.go:587] duration metric: took 4.76062757s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:51.304131  453808 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:42:51.307003  453808 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:42:51.307034  453808 node_conditions.go:123] node cpu capacity is 2
	I1101 10:42:51.307048  453808 node_conditions.go:105] duration metric: took 2.895897ms to run NodePressure ...
	I1101 10:42:51.307060  453808 start.go:242] waiting for startup goroutines ...
	I1101 10:42:51.307068  453808 start.go:247] waiting for cluster config update ...
	I1101 10:42:51.307076  453808 start.go:256] writing updated cluster config ...
	I1101 10:42:51.307384  453808 ssh_runner.go:195] Run: rm -f paused
	I1101 10:42:51.310918  453808 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:51.311534  453808 kapi.go:59] client config for pause-524446: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/profiles/pause-524446/client.key", CAFile:"/home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:42:51.314620  453808 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-shkrg" in "kube-system" namespace to be "Ready" or be gone ...
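pod_ready.go now polls the coredns pod until its Ready condition turns True (the "is not Ready" warnings below are iterations of that wait). A client-go sketch of such a check (kubeconfig path and polling cadence are assumptions; this is not minikube's implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // example path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-shkrg", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("coredns is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
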
	I1101 10:42:52.569217  439729 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.081107764s)
	W1101 10:42:52.569257  439729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 10:42:52.569266  439729 logs.go:123] Gathering logs for kube-apiserver [8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855] ...
	I1101 10:42:52.569276  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855"
	W1101 10:42:53.320527  453808 pod_ready.go:104] pod "coredns-66bc5c9577-shkrg" is not "Ready", error: <nil>
	W1101 10:42:55.320988  453808 pod_ready.go:104] pod "coredns-66bc5c9577-shkrg" is not "Ready", error: <nil>
	I1101 10:42:55.106949  439729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:42:58.162433  439729 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:47938->192.168.76.2:8443: read: connection reset by peer
	I1101 10:42:58.162489  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:42:58.162553  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:42:58.191706  439729 cri.go:89] found id: "e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:42:58.191746  439729 cri.go:89] found id: "8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855"
	I1101 10:42:58.191751  439729 cri.go:89] found id: ""
	I1101 10:42:58.191759  439729 logs.go:282] 2 containers: [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5 8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855]
	I1101 10:42:58.191819  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.196456  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.200188  439729 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:42:58.200264  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:42:58.229948  439729 cri.go:89] found id: ""
	I1101 10:42:58.229971  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.229979  439729 logs.go:284] No container was found matching "etcd"
	I1101 10:42:58.229986  439729 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:42:58.230055  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:42:58.256547  439729 cri.go:89] found id: ""
	I1101 10:42:58.256576  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.256585  439729 logs.go:284] No container was found matching "coredns"
	I1101 10:42:58.256592  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:42:58.256650  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:42:58.283951  439729 cri.go:89] found id: "6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:42:58.283976  439729 cri.go:89] found id: ""
	I1101 10:42:58.283988  439729 logs.go:282] 1 containers: [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5]
	I1101 10:42:58.284051  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.288560  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:42:58.288628  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:42:58.315996  439729 cri.go:89] found id: ""
	I1101 10:42:58.316018  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.316026  439729 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:42:58.316033  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:42:58.316089  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:42:58.343419  439729 cri.go:89] found id: "6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7"
	I1101 10:42:58.343439  439729 cri.go:89] found id: "4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:42:58.343444  439729 cri.go:89] found id: ""
	I1101 10:42:58.343451  439729 logs.go:282] 2 containers: [6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200]
	I1101 10:42:58.343507  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.347493  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:42:58.351739  439729 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:42:58.351851  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:42:58.379386  439729 cri.go:89] found id: ""
	I1101 10:42:58.379408  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.379417  439729 logs.go:284] No container was found matching "kindnet"
	I1101 10:42:58.379424  439729 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:42:58.379493  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:42:58.409607  439729 cri.go:89] found id: ""
	I1101 10:42:58.409680  439729 logs.go:282] 0 containers: []
	W1101 10:42:58.409695  439729 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:42:58.409710  439729 logs.go:123] Gathering logs for kube-apiserver [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5] ...
	I1101 10:42:58.409727  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:42:58.448466  439729 logs.go:123] Gathering logs for kube-apiserver [8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855] ...
	I1101 10:42:58.448512  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b4f40ea96bea610d969d3dd40b4042912352e5cbeb171138c6a656b19651855"
	I1101 10:42:58.491073  439729 logs.go:123] Gathering logs for kube-controller-manager [6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7] ...
	I1101 10:42:58.491104  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7"
	I1101 10:42:58.519590  439729 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:42:58.519618  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:42:58.583449  439729 logs.go:123] Gathering logs for kubelet ...
	I1101 10:42:58.583484  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:42:58.705867  439729 logs.go:123] Gathering logs for kube-scheduler [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5] ...
	I1101 10:42:58.705902  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:42:58.772126  439729 logs.go:123] Gathering logs for kube-controller-manager [4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200] ...
	I1101 10:42:58.772162  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:42:58.810184  439729 logs.go:123] Gathering logs for container status ...
	I1101 10:42:58.810214  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:42:58.859273  439729 logs.go:123] Gathering logs for dmesg ...
	I1101 10:42:58.859303  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:42:58.877184  439729 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:42:58.877212  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:42:58.951440  439729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:42:57.819992  453808 pod_ready.go:94] pod "coredns-66bc5c9577-shkrg" is "Ready"
	I1101 10:42:57.820017  453808 pod_ready.go:86] duration metric: took 6.505371702s for pod "coredns-66bc5c9577-shkrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:57.822879  453808 pod_ready.go:83] waiting for pod "etcd-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.328877  453808 pod_ready.go:94] pod "etcd-pause-524446" is "Ready"
	I1101 10:42:59.328908  453808 pod_ready.go:86] duration metric: took 1.50600069s for pod "etcd-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.331487  453808 pod_ready.go:83] waiting for pod "kube-apiserver-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.336084  453808 pod_ready.go:94] pod "kube-apiserver-pause-524446" is "Ready"
	I1101 10:42:59.336113  453808 pod_ready.go:86] duration metric: took 4.604483ms for pod "kube-apiserver-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.338597  453808 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.344780  453808 pod_ready.go:94] pod "kube-controller-manager-pause-524446" is "Ready"
	I1101 10:42:59.344809  453808 pod_ready.go:86] duration metric: took 6.18278ms for pod "kube-controller-manager-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.417615  453808 pod_ready.go:83] waiting for pod "kube-proxy-pjzqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:59.818366  453808 pod_ready.go:94] pod "kube-proxy-pjzqn" is "Ready"
	I1101 10:42:59.818396  453808 pod_ready.go:86] duration metric: took 400.755815ms for pod "kube-proxy-pjzqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:00.111532  453808 pod_ready.go:83] waiting for pod "kube-scheduler-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:43:02.118033  453808 pod_ready.go:104] pod "kube-scheduler-pause-524446" is not "Ready", error: <nil>
	I1101 10:43:03.117690  453808 pod_ready.go:94] pod "kube-scheduler-pause-524446" is "Ready"
	I1101 10:43:03.117720  453808 pod_ready.go:86] duration metric: took 3.006161913s for pod "kube-scheduler-pause-524446" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:03.117734  453808 pod_ready.go:40] duration metric: took 11.806785245s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:43:03.174002  453808 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:43:03.177231  453808 out.go:179] * Done! kubectl is now configured to use "pause-524446" cluster and "default" namespace by default
	I1101 10:43:01.453111  439729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:43:01.453528  439729 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:43:01.453583  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:43:01.453643  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:43:01.482749  439729 cri.go:89] found id: "e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:43:01.482773  439729 cri.go:89] found id: ""
	I1101 10:43:01.482781  439729 logs.go:282] 1 containers: [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5]
	I1101 10:43:01.482839  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:43:01.486466  439729 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:43:01.486541  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:43:01.514687  439729 cri.go:89] found id: ""
	I1101 10:43:01.514710  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.514718  439729 logs.go:284] No container was found matching "etcd"
	I1101 10:43:01.514730  439729 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:43:01.514792  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:43:01.542325  439729 cri.go:89] found id: ""
	I1101 10:43:01.542348  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.542357  439729 logs.go:284] No container was found matching "coredns"
	I1101 10:43:01.542364  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:43:01.542420  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:43:01.571857  439729 cri.go:89] found id: "6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:43:01.571876  439729 cri.go:89] found id: ""
	I1101 10:43:01.571885  439729 logs.go:282] 1 containers: [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5]
	I1101 10:43:01.571944  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:43:01.576254  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:43:01.576322  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:43:01.602960  439729 cri.go:89] found id: ""
	I1101 10:43:01.602983  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.602991  439729 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:43:01.602998  439729 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:43:01.603060  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:43:01.630085  439729 cri.go:89] found id: "6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7"
	I1101 10:43:01.630106  439729 cri.go:89] found id: "4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:43:01.630111  439729 cri.go:89] found id: ""
	I1101 10:43:01.630119  439729 logs.go:282] 2 containers: [6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200]
	I1101 10:43:01.630178  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:43:01.634285  439729 ssh_runner.go:195] Run: which crictl
	I1101 10:43:01.637953  439729 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:43:01.638029  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:43:01.668696  439729 cri.go:89] found id: ""
	I1101 10:43:01.668723  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.668732  439729 logs.go:284] No container was found matching "kindnet"
	I1101 10:43:01.668738  439729 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:43:01.668799  439729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:43:01.698862  439729 cri.go:89] found id: ""
	I1101 10:43:01.698937  439729 logs.go:282] 0 containers: []
	W1101 10:43:01.698964  439729 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:43:01.699003  439729 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:43:01.699038  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:43:01.773211  439729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:43:01.773233  439729 logs.go:123] Gathering logs for kube-scheduler [6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5] ...
	I1101 10:43:01.773260  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d9640a8adb65dc1abbc7975532ad6ac17e61670fecb5f48a3d77ae2235925e5"
	I1101 10:43:01.848952  439729 logs.go:123] Gathering logs for kube-controller-manager [4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200] ...
	I1101 10:43:01.848989  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4098a0a2bdc4c83274a7fe5e8f921cfbe03825165ca2aa365e34efe610682200"
	I1101 10:43:01.877379  439729 logs.go:123] Gathering logs for container status ...
	I1101 10:43:01.877408  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:43:01.921320  439729 logs.go:123] Gathering logs for kubelet ...
	I1101 10:43:01.921349  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:43:02.037140  439729 logs.go:123] Gathering logs for dmesg ...
	I1101 10:43:02.037181  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:43:02.054505  439729 logs.go:123] Gathering logs for kube-apiserver [e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5] ...
	I1101 10:43:02.054534  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e809eaa682342bfa80bb6b44e8bf7685f951d1c9262e0e4c5d9a035973b655a5"
	I1101 10:43:02.088171  439729 logs.go:123] Gathering logs for kube-controller-manager [6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7] ...
	I1101 10:43:02.088212  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b859572ce362d961a6199cfb3320d1d88a99e9cae6c1712074e90aee14652b7"
	I1101 10:43:02.121664  439729 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:43:02.121692  439729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.860396147Z" level=info msg="Started container" PID=2299 containerID=c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf description=kube-system/coredns-66bc5c9577-shkrg/coredns id=c27bb7e3-8fff-45f6-b3a3-9022eaeb8750 name=/runtime.v1.RuntimeService/StartContainer sandboxID=231cb7785eb15972040f1e91279887a59065ad5c8a4c5a1a1e218492c3fba5ba
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.860869826Z" level=info msg="Started container" PID=2319 containerID=562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804 description=kube-system/kindnet-vfk7j/kindnet-cni id=97e83148-05cc-46e6-bf90-6e5a6845c5fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=f55dc37c46c79bf437546a3a3ae207530504aa379f853e440d94350fb13b6a2d
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.864038497Z" level=info msg="Starting container: c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0" id=8a89f242-06be-4dd1-aefb-9421e56cdf41 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.894549176Z" level=info msg="Started container" PID=2308 containerID=c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0 description=kube-system/kube-controller-manager-pause-524446/kube-controller-manager id=8a89f242-06be-4dd1-aefb-9421e56cdf41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f5a1c79d69de6150ae931168617730eddd360709fbc20b64da56080f311a3a12
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.906407712Z" level=info msg="Created container 0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0: kube-system/kube-apiserver-pause-524446/kube-apiserver" id=56be0c4e-f974-495f-8d4e-985d7b132470 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.90706243Z" level=info msg="Starting container: 0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0" id=d731d617-e284-404d-98f6-d0b791ec36e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.909472578Z" level=info msg="Started container" PID=2337 containerID=0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0 description=kube-system/kube-apiserver-pause-524446/kube-apiserver id=d731d617-e284-404d-98f6-d0b791ec36e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7418ca25007115fab06e870a34af984f6b9bf48f12be949df6557400d2fa8b5b
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.909668158Z" level=info msg="Created container 4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace: kube-system/kube-scheduler-pause-524446/kube-scheduler" id=135d88ee-97b8-4271-b8b8-293b6e947c54 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.912241631Z" level=info msg="Starting container: 4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace" id=3c4cdd4c-4f69-40d1-adf2-4becc632b639 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:45 pause-524446 crio[2082]: time="2025-11-01T10:42:45.922831157Z" level=info msg="Started container" PID=2325 containerID=4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace description=kube-system/kube-scheduler-pause-524446/kube-scheduler id=3c4cdd4c-4f69-40d1-adf2-4becc632b639 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd0de25ea9ff7b91147192325b49d1f627103c30816628917c436e1990b47ad5
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.229884503Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.233377197Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.233409846Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.233430597Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.24250106Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.242538631Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.242559276Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.261827729Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.26186791Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.261897055Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.273344714Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.273534993Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.273632963Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.278320869Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:42:56 pause-524446 crio[2082]: time="2025-11-01T10:42:56.278501753Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	562d7a480bd8b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   f55dc37c46c79       kindnet-vfk7j                          kube-system
	0355684c3c55d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   7418ca2500711       kube-apiserver-pause-524446            kube-system
	4b2174d9ada72       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   fd0de25ea9ff7       kube-scheduler-pause-524446            kube-system
	c7cc427c765d5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   231cb7785eb15       coredns-66bc5c9577-shkrg               kube-system
	c90f5c689ca41       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   f5a1c79d69de6       kube-controller-manager-pause-524446   kube-system
	feb86306de65e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   0b29e4977bde9       kube-proxy-pjzqn                       kube-system
	e6846cf4faaf8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   23ecc89391305       etcd-pause-524446                      kube-system
	4e118b3a4f353       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   231cb7785eb15       coredns-66bc5c9577-shkrg               kube-system
	7af788e2c649b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   0b29e4977bde9       kube-proxy-pjzqn                       kube-system
	d4694a41e4759       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   f55dc37c46c79       kindnet-vfk7j                          kube-system
	d01823d33f87d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   7418ca2500711       kube-apiserver-pause-524446            kube-system
	81746508ca1cd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   f5a1c79d69de6       kube-controller-manager-pause-524446   kube-system
	33b82756faa61       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   23ecc89391305       etcd-pause-524446                      kube-system
	f367c628d682b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   fd0de25ea9ff7       kube-scheduler-pause-524446            kube-system
	
	
	==> coredns [4e118b3a4f353c5d20da38b1b32e1892b43e71a1fc32c7794559b9e357567505] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60112 - 32743 "HINFO IN 1558694380626053744.1513679670407991713. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013401919s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c7cc427c765d5f0268ac160214900f067b479bdb07862e5cff7adbb1edbed5bf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48443 - 29734 "HINFO IN 5295854242861481314.2168278353610081996. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00462923s
	
	
	==> describe nodes <==
	Name:               pause-524446
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-524446
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=pause-524446
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-524446
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:43:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:33 +0000   Sat, 01 Nov 2025 10:41:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:33 +0000   Sat, 01 Nov 2025 10:41:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:33 +0000   Sat, 01 Nov 2025 10:41:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:33 +0000   Sat, 01 Nov 2025 10:42:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-524446
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d21f825f-2552-4d1f-a956-ef295a4b598a
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-shkrg                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-524446                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-vfk7j                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-524446             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-524446    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-pjzqn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-524446             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 74s   kube-proxy       
	  Normal   Starting                 18s   kube-proxy       
	  Normal   Starting                 81s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s   kubelet          Node pause-524446 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s   kubelet          Node pause-524446 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s   kubelet          Node pause-524446 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s   node-controller  Node pause-524446 event: Registered Node pause-524446 in Controller
	  Normal   NodeReady                35s   kubelet          Node pause-524446 status is now: NodeReady
	  Normal   RegisteredNode           15s   node-controller  Node pause-524446 event: Registered Node pause-524446 in Controller
	
	
	==> dmesg <==
	[ +32.523814] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:16] overlayfs: idmapped layers are currently not supported
	[  +4.224848] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.523616] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[ +37.261841] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [33b82756faa61b08a4a452bdc72a95129c5eae8d424452ca97c3b65a03880595] <==
	{"level":"warn","ts":"2025-11-01T10:41:43.065475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.121737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.145851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.176067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.221398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.246381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:43.403576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:42:37.922076Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:42:37.922121Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-524446","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:42:37.922208Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:42:38.124089Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-11-01T10:42:38.124467Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:42:38.124528Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:42:38.124562Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-01T10:42:38.124248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:42:38.124276Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-11-01T10:42:38.124402Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-01T10:42:38.124707Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:42:38.124686Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:42:38.124780Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:42:38.124729Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T10:42:38.128387Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-01T10:42:38.128468Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:42:38.128502Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T10:42:38.128509Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-524446","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [e6846cf4faaf8570defebff2c13c97d96e24a6c68f780e2878dfc1550e88dd21] <==
	{"level":"warn","ts":"2025-11-01T10:42:48.470565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.491173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.511381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.530787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.555438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.566626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.582510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.597912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.615707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.643305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.661807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.679030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.704359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.717239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.741453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.759302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.773795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.801396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.813039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.824262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.847805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.867640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.883488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.907842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:48.983195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:43:08 up  2:25,  0 user,  load average: 1.66, 2.46, 2.20
	Linux pause-524446 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [562d7a480bd8b150f3ff5490ca57f085f8cce74515fcca6f6b3a0da9f8c3e804] <==
	I1101 10:42:45.978023       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:42:46.025111       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:42:46.025280       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:42:46.025293       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:42:46.025307       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:42:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:42:46.226973       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:42:46.226990       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:42:46.226999       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:42:46.227306       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:42:49.927832       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:49.927944       1 metrics.go:72] Registering metrics
	I1101 10:42:49.928048       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:56.229412       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:56.229541       1 main.go:301] handling current node
	I1101 10:43:06.225650       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:43:06.225722       1 main.go:301] handling current node
	
	
	==> kindnet [d4694a41e4759b5ed3c113f391ee45c1533da5781f43154eb18a5c37c530d6f4] <==
	I1101 10:41:52.533892       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:41:52.544084       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:41:52.544372       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:41:52.544422       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:41:52.544485       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:41:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:41:52.746233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:41:52.746321       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:41:52.746354       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:41:52.750803       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:42:22.746392       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:42:22.747453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:42:22.747452       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:42:22.747615       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:42:23.946579       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:23.946611       1 metrics.go:72] Registering metrics
	I1101 10:42:23.946674       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:32.747374       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:32.747434       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0355684c3c55d688e54737e80f9916e991d2d50d8c6a66bbc210cb14a3a724b0] <==
	I1101 10:42:49.790975       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:42:49.791322       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:42:49.791677       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:42:49.791752       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:42:49.792154       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:42:49.793151       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:42:49.794148       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:42:49.795159       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:42:49.795253       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:42:49.812556       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:42:49.793151       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:42:49.830425       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:42:49.839007       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:42:49.840241       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:42:49.840332       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:42:49.840365       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:42:49.840407       1 cache.go:39] Caches are synced for autoregister controller
	E1101 10:42:49.866148       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:42:49.875570       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:42:50.598516       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:42:51.753685       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:42:53.184767       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:42:53.399314       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:42:53.498493       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:42:53.548363       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56] <==
	W1101 10:42:37.977279       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978060       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978117       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976067       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978074       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978278       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976144       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976103       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976181       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976209       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976234       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976260       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976285       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976313       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.976340       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978188       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978217       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978253       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978595       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978626       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978653       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978685       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978710       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978735       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:42:37.978764       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [81746508ca1cda9731c90bead6a9925450ea0a9dbc2627c6c3ccbb245e90b516] <==
	I1101 10:41:51.356211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:41:51.361879       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:41:51.362298       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-524446" podCIDRs=["10.244.0.0/24"]
	I1101 10:41:51.365692       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:41:51.375343       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:41:51.377962       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:41:51.391063       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:41:51.393850       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:41:51.394216       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:41:51.394293       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:41:51.394656       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:41:51.394695       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:41:51.397722       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:41:51.397809       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:41:51.398602       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:41:51.398676       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:41:51.398698       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:41:51.399181       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:41:51.399214       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:41:51.399378       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:41:51.404074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:41:51.404098       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:41:51.404104       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:41:51.407892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:42:36.351223       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [c90f5c689ca41496470b62fe52d1d0fbba1fd9dfb4a55c9bb3bc66906f0213d0] <==
	I1101 10:42:53.170287       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:42:53.173946       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:42:53.180797       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:42:53.180904       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:42:53.187325       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:42:53.191120       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:42:53.191267       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:42:53.191342       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:42:53.191789       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:42:53.191854       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:42:53.192147       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:42:53.192279       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:42:53.192318       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:42:53.192386       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:42:53.192445       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:42:53.192509       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-524446"
	I1101 10:42:53.192543       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:42:53.192581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:42:53.193318       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:42:53.199200       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:42:53.201395       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:42:53.204500       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:42:53.206740       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:42:53.209944       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:42:53.212205       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [7af788e2c649b3573c775b3824d9e334bcc1638c0fff42cb56de79e9832c2866] <==
	I1101 10:41:53.661998       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:41:53.770530       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:41:53.870694       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:41:53.870830       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:41:53.870935       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:41:53.904567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:41:53.904684       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:41:53.914553       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:41:53.914990       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:41:53.915222       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:53.916694       1 config.go:200] "Starting service config controller"
	I1101 10:41:53.916761       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:41:53.916805       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:41:53.916833       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:41:53.916874       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:41:53.916901       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:41:53.919075       1 config.go:309] "Starting node config controller"
	I1101 10:41:53.921029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:41:53.921097       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:41:54.017047       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:41:54.017120       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:41:54.017181       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [feb86306de65e800bdcacea118cfeb11cf011bdbd9410d36359d2de63e40e91f] <==
	I1101 10:42:46.829703       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:42:47.881453       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:42:49.907146       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:42:49.907186       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:42:49.907250       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:42:49.979032       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:42:49.979098       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:42:49.989424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:42:49.989832       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:42:49.990059       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:49.992152       1 config.go:200] "Starting service config controller"
	I1101 10:42:49.997249       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:42:49.993086       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:42:49.997352       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:42:49.993105       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:42:49.997363       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:42:49.994407       1 config.go:309] "Starting node config controller"
	I1101 10:42:49.997407       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:42:49.997413       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:42:50.098464       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:42:50.098709       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:42:50.098769       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4b2174d9ada72018ee6dbeedebd9756b4158a3affb8fb211e54118b8ac4ceace] <==
	I1101 10:42:48.224121       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:42:49.760688       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:42:49.760726       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:42:49.760736       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:42:49.760743       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:42:49.823526       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:42:49.823676       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:49.827847       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:49.828036       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:49.832126       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:42:49.832224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:42:49.929274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f367c628d682bb258e0b9abe783adb0a9d4c25ac0de1c1c324d2da0d34b69daa] <==
	E1101 10:41:45.325843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:41:45.326361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:41:45.326408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:41:45.326463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:41:45.326534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:41:45.326741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:41:45.326822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:41:45.327221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:41:45.327377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:41:45.327513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:41:45.327805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:41:45.327959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:41:45.328049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:41:45.328206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:41:45.328443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:41:45.328639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:41:45.328707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:41:46.226261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 10:41:49.288003       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:37.918418       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:42:37.918441       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:42:37.918456       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:42:37.918484       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:37.918636       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:42:37.918658       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.617872    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="eb9b51bd83e399bd22d655ec3a3be5f0" pod="kube-system/kube-scheduler-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.618092    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39db5a70ba68144a4abc3e4f370daaf4" pod="kube-system/etcd-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.618290    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22a7a1b2f4684b61d603d3318779309e" pod="kube-system/kube-apiserver-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.618690    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9492c2d6a0fdf3febbfe569a5337abdd" pod="kube-system/kube-controller-manager-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: I1101 10:42:45.620353    1320 scope.go:117] "RemoveContainer" containerID="d01823d33f87dc24cdc737ad3d46369a7ac999c0e626cba3ebe1039b23c0ea56"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.620867    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-shkrg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3264e176-01c9-438e-8c67-40c0ffb8dde7" pod="kube-system/coredns-66bc5c9577-shkrg"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.621124    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="eb9b51bd83e399bd22d655ec3a3be5f0" pod="kube-system/kube-scheduler-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.621345    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39db5a70ba68144a4abc3e4f370daaf4" pod="kube-system/etcd-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.621624    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22a7a1b2f4684b61d603d3318779309e" pod="kube-system/kube-apiserver-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.621842    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-524446\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9492c2d6a0fdf3febbfe569a5337abdd" pod="kube-system/kube-controller-manager-pause-524446"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.622062    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjzqn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="379fefcf-57b3-4e29-bfea-91ec14ed93b0" pod="kube-system/kube-proxy-pjzqn"
	Nov 01 10:42:45 pause-524446 kubelet[1320]: E1101 10:42:45.622294    1320 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-vfk7j\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fe8582b1-a504-4627-9ed1-7a06468425b9" pod="kube-system/kindnet-vfk7j"
	Nov 01 10:42:47 pause-524446 kubelet[1320]: W1101 10:42:47.545241    1320 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.722837    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-524446\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="22a7a1b2f4684b61d603d3318779309e" pod="kube-system/kube-apiserver-pause-524446"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.723044    1320 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-524446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.723066    1320 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-524446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.724205    1320 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-524446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.732604    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-524446\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="9492c2d6a0fdf3febbfe569a5337abdd" pod="kube-system/kube-controller-manager-pause-524446"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.744318    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-pjzqn\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="379fefcf-57b3-4e29-bfea-91ec14ed93b0" pod="kube-system/kube-proxy-pjzqn"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.750074    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-vfk7j\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="fe8582b1-a504-4627-9ed1-7a06468425b9" pod="kube-system/kindnet-vfk7j"
	Nov 01 10:42:49 pause-524446 kubelet[1320]: E1101 10:42:49.768735    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-shkrg\" is forbidden: User \"system:node:pause-524446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-524446' and this object" podUID="3264e176-01c9-438e-8c67-40c0ffb8dde7" pod="kube-system/coredns-66bc5c9577-shkrg"
	Nov 01 10:42:57 pause-524446 kubelet[1320]: W1101 10:42:57.566827    1320 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 01 10:43:03 pause-524446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:43:03 pause-524446 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:43:03 pause-524446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-524446 -n pause-524446
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-524446 -n pause-524446: exit status 2 (366.407605ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-524446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.71s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (285.923309ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:46:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
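(Note: the MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state check, which shells into the node and runs runc; here it trips over a missing /run/runc state directory. A minimal way to rerun that same check by hand, assuming the old-k8s-version-245622 container is still up — illustrative commands, not part of the test output:

	minikube ssh -p old-k8s-version-245622 -- sudo runc list -f json   # the exact command the paused-state check runs
	minikube ssh -p old-k8s-version-245622 -- ls -ld /run/runc         # confirm whether the runc state directory exists

The same "open /run/runc: no such file or directory" error appears consistent with the other exit-status-11 addon-enable and pause failures reported for this group.)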
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-245622 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-245622 describe deploy/metrics-server -n kube-system: exit status 1 (83.883429ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-245622 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-245622
helpers_test.go:243: (dbg) docker inspect old-k8s-version-245622:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3",
	        "Created": "2025-11-01T10:45:35.000054348Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471380,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:45:35.094334147Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/hosts",
	        "LogPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3-json.log",
	        "Name": "/old-k8s-version-245622",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-245622:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-245622",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3",
	                "LowerDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-245622",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-245622/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-245622",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-245622",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-245622",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "59aa83a72f0717087c992e60faa5fab4f1f6777413966232e6df3e5208905dad",
	            "SandboxKey": "/var/run/docker/netns/59aa83a72f07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-245622": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:81:85:7f:25:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "886e51d4881a05bd8806566eef0c793a83105f195753997f1581ba0395c0dfba",
	                    "EndpointID": "c35ca679bb023c2ad496a4f685b8a0c64a396bbb617e05520cce2add9ee9c2d4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-245622",
	                        "c9c5181d464a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-245622 -n old-k8s-version-245622
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-245622 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-245622 logs -n 25: (1.306247449s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-883951 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo containerd config dump                                                                                                                                                                                                  │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo crio config                                                                                                                                                                                                             │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ delete  │ -p cilium-883951                                                                                                                                                                                                                              │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p force-systemd-env-555657 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-555657  │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p kubernetes-upgrade-946953                                                                                                                                                                                                                  │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p force-systemd-env-555657                                                                                                                                                                                                                   │ force-systemd-env-555657  │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-308600    │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p cert-options-186677 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ cert-options-186677 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ -p cert-options-186677 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ delete  │ -p cert-options-186677                                                                                                                                                                                                                        │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:45:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:45:28.616600  470791 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:45:28.616826  470791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:45:28.616840  470791 out.go:374] Setting ErrFile to fd 2...
	I1101 10:45:28.616845  470791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:45:28.617335  470791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:45:28.617823  470791 out.go:368] Setting JSON to false
	I1101 10:45:28.619217  470791 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8881,"bootTime":1761985048,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:45:28.619292  470791 start.go:143] virtualization:  
	I1101 10:45:28.623202  470791 out.go:179] * [old-k8s-version-245622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:45:28.627065  470791 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:45:28.627269  470791 notify.go:221] Checking for updates...
	I1101 10:45:28.631286  470791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:45:28.634455  470791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:45:28.638769  470791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:45:28.642600  470791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:45:28.645777  470791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:45:28.591309  465845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-308600" context rescaled to 1 replicas
	I1101 10:45:28.717125  465845 api_server.go:72] duration metric: took 1.667060098s to wait for apiserver process to appear ...
	I1101 10:45:28.717139  465845 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:45:28.717158  465845 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:45:28.717335  465845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065274043s)
	I1101 10:45:28.722885  465845 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 10:45:28.650156  470791 config.go:182] Loaded profile config "cert-expiration-308600": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:45:28.650347  470791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:45:28.697681  470791 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:45:28.697813  470791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:45:28.827351  470791 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:45:28.816975864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:45:28.827458  470791 docker.go:319] overlay module found
	I1101 10:45:28.830506  470791 out.go:179] * Using the docker driver based on user configuration
	I1101 10:45:28.731466  465845 addons.go:515] duration metric: took 1.681058747s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 10:45:28.748115  465845 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:45:28.750743  465845 api_server.go:141] control plane version: v1.34.1
	I1101 10:45:28.750764  465845 api_server.go:131] duration metric: took 33.617363ms to wait for apiserver health ...
	I1101 10:45:28.750773  465845 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:45:28.769305  465845 system_pods.go:59] 5 kube-system pods found
	I1101 10:45:28.769330  465845 system_pods.go:61] "etcd-cert-expiration-308600" [6cf2c79e-33c4-4239-81c2-efa06533c42a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:45:28.769340  465845 system_pods.go:61] "kube-apiserver-cert-expiration-308600" [72a2b169-b101-41ad-bd4f-d78a4cba119e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:45:28.769347  465845 system_pods.go:61] "kube-controller-manager-cert-expiration-308600" [c5feb1f1-a5aa-4a67-a3a1-68c2e79e0a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:45:28.769354  465845 system_pods.go:61] "kube-scheduler-cert-expiration-308600" [9ae25045-8bce-4946-96ff-ee21fb259869] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:45:28.769359  465845 system_pods.go:61] "storage-provisioner" [5d450ea6-b179-4fcc-b0bf-d74c2b424722] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:45:28.769365  465845 system_pods.go:74] duration metric: took 18.586868ms to wait for pod list to return data ...
	I1101 10:45:28.769380  465845 kubeadm.go:587] duration metric: took 1.71932947s to wait for: map[apiserver:true system_pods:true]
	I1101 10:45:28.769394  465845 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:45:28.781126  465845 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:45:28.781148  465845 node_conditions.go:123] node cpu capacity is 2
	I1101 10:45:28.781161  465845 node_conditions.go:105] duration metric: took 11.762886ms to run NodePressure ...
	I1101 10:45:28.781172  465845 start.go:242] waiting for startup goroutines ...
	I1101 10:45:28.781179  465845 start.go:247] waiting for cluster config update ...
	I1101 10:45:28.781188  465845 start.go:256] writing updated cluster config ...
	I1101 10:45:28.781495  465845 ssh_runner.go:195] Run: rm -f paused
	I1101 10:45:28.890979  465845 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:45:28.894152  465845 out.go:179] * Done! kubectl is now configured to use "cert-expiration-308600" cluster and "default" namespace by default
	I1101 10:45:28.833496  470791 start.go:309] selected driver: docker
	I1101 10:45:28.833518  470791 start.go:930] validating driver "docker" against <nil>
	I1101 10:45:28.833532  470791 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:45:28.834266  470791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:45:28.940128  470791 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:45:28.914360242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:45:28.940274  470791 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:45:28.940512  470791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:45:28.943264  470791 out.go:179] * Using Docker driver with root privileges
	I1101 10:45:28.947071  470791 cni.go:84] Creating CNI manager for ""
	I1101 10:45:28.947141  470791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:45:28.947152  470791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:45:28.947236  470791 start.go:353] cluster config:
	{Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:45:28.950437  470791 out.go:179] * Starting "old-k8s-version-245622" primary control-plane node in "old-k8s-version-245622" cluster
	I1101 10:45:28.953315  470791 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:45:28.956459  470791 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:45:28.959443  470791 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:45:28.959518  470791 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 10:45:28.959532  470791 cache.go:59] Caching tarball of preloaded images
	I1101 10:45:28.959544  470791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:45:28.959626  470791 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:45:28.959637  470791 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:45:28.959765  470791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/config.json ...
	I1101 10:45:28.959787  470791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/config.json: {Name:mk095ac17eb388be8ccae28684967ad260852177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:45:28.980124  470791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:45:28.980148  470791 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:45:28.980174  470791 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:45:28.980741  470791 start.go:360] acquireMachinesLock for old-k8s-version-245622: {Name:mkfbe1634de833e16a5a7580b9fd5f9c75eacf88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:45:28.980907  470791 start.go:364] duration metric: took 128.707µs to acquireMachinesLock for "old-k8s-version-245622"
	I1101 10:45:28.980989  470791 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:45:28.981076  470791 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:45:28.985083  470791 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:45:28.985492  470791 start.go:159] libmachine.API.Create for "old-k8s-version-245622" (driver="docker")
	I1101 10:45:28.985540  470791 client.go:173] LocalClient.Create starting
	I1101 10:45:28.985656  470791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 10:45:28.985708  470791 main.go:143] libmachine: Decoding PEM data...
	I1101 10:45:28.985729  470791 main.go:143] libmachine: Parsing certificate...
	I1101 10:45:28.985810  470791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 10:45:28.985837  470791 main.go:143] libmachine: Decoding PEM data...
	I1101 10:45:28.985855  470791 main.go:143] libmachine: Parsing certificate...
	I1101 10:45:28.986422  470791 cli_runner.go:164] Run: docker network inspect old-k8s-version-245622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:45:29.005389  470791 cli_runner.go:211] docker network inspect old-k8s-version-245622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:45:29.005498  470791 network_create.go:284] running [docker network inspect old-k8s-version-245622] to gather additional debugging logs...
	I1101 10:45:29.005522  470791 cli_runner.go:164] Run: docker network inspect old-k8s-version-245622
	W1101 10:45:29.035910  470791 cli_runner.go:211] docker network inspect old-k8s-version-245622 returned with exit code 1
	I1101 10:45:29.035944  470791 network_create.go:287] error running [docker network inspect old-k8s-version-245622]: docker network inspect old-k8s-version-245622: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-245622 not found
	I1101 10:45:29.035960  470791 network_create.go:289] output of [docker network inspect old-k8s-version-245622]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-245622 not found
	
	** /stderr **
	I1101 10:45:29.036067  470791 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:45:29.054131  470791 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
	I1101 10:45:29.054498  470791 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-adecbbb769f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:b0:b5:2e:4c:30} reservation:<nil>}
	I1101 10:45:29.054736  470791 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2077d26d1806 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:49:68:b6:9e:fb} reservation:<nil>}
	I1101 10:45:29.055011  470791 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-956e09f456d4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:6c:72:91:86:e0} reservation:<nil>}
	I1101 10:45:29.055436  470791 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a481f0}
	I1101 10:45:29.055466  470791 network_create.go:124] attempt to create docker network old-k8s-version-245622 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:45:29.055526  470791 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-245622 old-k8s-version-245622
	I1101 10:45:29.133367  470791 network_create.go:108] docker network old-k8s-version-245622 192.168.85.0/24 created
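The free-subnet scan above settles on 192.168.85.0/24 only after skipping the four bridges already in use. To confirm what Docker actually provisioned, the same Go template the log itself runs can be pointed at the new network; a minimal sketch, with the network name taken from this run:

    docker network inspect old-k8s-version-245622 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected for this run: 192.168.85.0/24 192.168.85.1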
	I1101 10:45:29.133402  470791 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-245622" container
	I1101 10:45:29.133511  470791 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:45:29.150508  470791 cli_runner.go:164] Run: docker volume create old-k8s-version-245622 --label name.minikube.sigs.k8s.io=old-k8s-version-245622 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:45:29.169911  470791 oci.go:103] Successfully created a docker volume old-k8s-version-245622
	I1101 10:45:29.170007  470791 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-245622-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-245622 --entrypoint /usr/bin/test -v old-k8s-version-245622:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:45:29.848838  470791 oci.go:107] Successfully prepared a docker volume old-k8s-version-245622
	I1101 10:45:29.848889  470791 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:45:29.848909  470791 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:45:29.849080  470791 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-245622:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:45:34.927118  470791 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-245622:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.07799324s)
	I1101 10:45:34.927152  470791 kic.go:203] duration metric: took 5.078240472s to extract preloaded images to volume ...
	W1101 10:45:34.927315  470791 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:45:34.927424  470791 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:45:34.984407  470791 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-245622 --name old-k8s-version-245622 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-245622 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-245622 --network old-k8s-version-245622 --ip 192.168.85.2 --volume old-k8s-version-245622:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:45:35.351259  470791 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Running}}
	I1101 10:45:35.374651  470791 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:45:35.402972  470791 cli_runner.go:164] Run: docker exec old-k8s-version-245622 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:45:35.455278  470791 oci.go:144] the created container "old-k8s-version-245622" has a running status.
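The kic container above publishes SSH, the API server, and the registry ports on ephemeral loopback ports (the --publish=127.0.0.1:: flags in the docker run line). The 22/tcp binding is what the provisioning steps below dial; as a sketch, it can be read back directly:

    docker port old-k8s-version-245622 22/tcp
    # prints the host binding, e.g. 127.0.0.1:33418 in this run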
	I1101 10:45:35.455306  470791 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa...
	I1101 10:45:36.450288  470791 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:45:36.469718  470791 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:45:36.486370  470791 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:45:36.486395  470791 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-245622 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:45:36.530465  470791 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:45:36.559941  470791 machine.go:94] provisionDockerMachine start ...
	I1101 10:45:36.560049  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:36.579743  470791 main.go:143] libmachine: Using SSH client type: native
	I1101 10:45:36.580115  470791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1101 10:45:36.580126  470791 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:45:36.581129  470791 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:45:39.736746  470791 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245622
	
	I1101 10:45:39.736771  470791 ubuntu.go:182] provisioning hostname "old-k8s-version-245622"
	I1101 10:45:39.736838  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:39.761789  470791 main.go:143] libmachine: Using SSH client type: native
	I1101 10:45:39.762102  470791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1101 10:45:39.762123  470791 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-245622 && echo "old-k8s-version-245622" | sudo tee /etc/hostname
	I1101 10:45:39.934812  470791 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245622
	
	I1101 10:45:39.934897  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:39.955990  470791 main.go:143] libmachine: Using SSH client type: native
	I1101 10:45:39.956300  470791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1101 10:45:39.956317  470791 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-245622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-245622/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-245622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:45:40.137703  470791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:45:40.137780  470791 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:45:40.137819  470791 ubuntu.go:190] setting up certificates
	I1101 10:45:40.137859  470791 provision.go:84] configureAuth start
	I1101 10:45:40.137964  470791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:45:40.155855  470791 provision.go:143] copyHostCerts
	I1101 10:45:40.155929  470791 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:45:40.155943  470791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:45:40.156034  470791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:45:40.156129  470791 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:45:40.156134  470791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:45:40.156159  470791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:45:40.156209  470791 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:45:40.156214  470791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:45:40.156236  470791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:45:40.156281  470791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-245622 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-245622]
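The server certificate generated here should carry exactly the SAN list shown in the log line above. A hedged way to verify it from the host, in the same spirit as the apiserver.crt check recorded in the command table (the path is the ServerCertPath this run reports):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should list localhost, minikube, old-k8s-version-245622, 127.0.0.1, 192.168.85.2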
	I1101 10:45:40.552132  470791 provision.go:177] copyRemoteCerts
	I1101 10:45:40.552274  470791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:45:40.552338  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:40.586293  470791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:45:40.697046  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:45:40.718537  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:45:40.740222  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:45:40.760176  470791 provision.go:87] duration metric: took 622.275401ms to configureAuth
	I1101 10:45:40.760206  470791 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:45:40.760397  470791 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:45:40.760511  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:40.778565  470791 main.go:143] libmachine: Using SSH client type: native
	I1101 10:45:40.778872  470791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1101 10:45:40.778893  470791 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:45:41.094860  470791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:45:41.094887  470791 machine.go:97] duration metric: took 4.534922801s to provisionDockerMachine
	I1101 10:45:41.094897  470791 client.go:176] duration metric: took 12.109345907s to LocalClient.Create
	I1101 10:45:41.094910  470791 start.go:167] duration metric: took 12.109421576s to libmachine.API.Create "old-k8s-version-245622"
	I1101 10:45:41.094917  470791 start.go:293] postStartSetup for "old-k8s-version-245622" (driver="docker")
	I1101 10:45:41.094927  470791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:45:41.095009  470791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:45:41.095052  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:41.120916  470791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:45:41.229400  470791 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:45:41.232794  470791 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:45:41.232831  470791 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:45:41.232845  470791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:45:41.232902  470791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:45:41.233022  470791 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:45:41.233240  470791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:45:41.241577  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:45:41.274135  470791 start.go:296] duration metric: took 179.202101ms for postStartSetup
	I1101 10:45:41.274497  470791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:45:41.294921  470791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/config.json ...
	I1101 10:45:41.295562  470791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:45:41.295618  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:41.324510  470791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:45:41.430664  470791 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:45:41.436278  470791 start.go:128] duration metric: took 12.455185985s to createHost
	I1101 10:45:41.436300  470791 start.go:83] releasing machines lock for "old-k8s-version-245622", held for 12.455338897s
	I1101 10:45:41.436383  470791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:45:41.455320  470791 ssh_runner.go:195] Run: cat /version.json
	I1101 10:45:41.455362  470791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:45:41.455377  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:41.455428  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:45:41.479899  470791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:45:41.483265  470791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:45:41.685292  470791 ssh_runner.go:195] Run: systemctl --version
	I1101 10:45:41.692547  470791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:45:41.734087  470791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:45:41.738426  470791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:45:41.738507  470791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:45:41.769370  470791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:45:41.769435  470791 start.go:496] detecting cgroup driver to use...
	I1101 10:45:41.769474  470791 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:45:41.769530  470791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:45:41.788167  470791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:45:41.803068  470791 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:45:41.803132  470791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:45:41.821544  470791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:45:41.841029  470791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:45:41.962796  470791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:45:42.127826  470791 docker.go:234] disabling docker service ...
	I1101 10:45:42.127918  470791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:45:42.163669  470791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:45:42.188050  470791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:45:42.348211  470791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:45:42.477903  470791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:45:42.491478  470791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:45:42.506078  470791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:45:42.506152  470791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:45:42.515438  470791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:45:42.515553  470791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:45:42.524897  470791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:45:42.540206  470791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:45:42.552043  470791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:45:42.560788  470791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:45:42.570283  470791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:45:42.585068  470791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:45:42.595028  470791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:45:42.603228  470791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:45:42.611171  470791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:45:42.721556  470791 ssh_runner.go:195] Run: sudo systemctl restart crio
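All of the sed edits above touch only /etc/crio/crio.conf.d/02-crio.conf, so after the restart the effective drop-in can be sanity-checked with one grep; a sketch, assuming nothing else has rewritten the file:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs",
    #           conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0",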
	I1101 10:45:42.848319  470791 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:45:42.848434  470791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:45:42.852606  470791 start.go:564] Will wait 60s for crictl version
	I1101 10:45:42.852723  470791 ssh_runner.go:195] Run: which crictl
	I1101 10:45:42.856541  470791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:45:42.883315  470791 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:45:42.883472  470791 ssh_runner.go:195] Run: crio --version
	I1101 10:45:42.911462  470791 ssh_runner.go:195] Run: crio --version
	I1101 10:45:42.955620  470791 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 10:45:42.958713  470791 cli_runner.go:164] Run: docker network inspect old-k8s-version-245622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:45:42.976649  470791 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:45:42.980762  470791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:45:42.992489  470791 kubeadm.go:884] updating cluster {Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:45:42.992625  470791 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:45:42.992689  470791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:45:43.027547  470791 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:45:43.027574  470791 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:45:43.027638  470791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:45:43.055273  470791 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:45:43.055313  470791 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:45:43.055322  470791 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 10:45:43.055407  470791 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-245622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:45:43.055493  470791 ssh_runner.go:195] Run: crio config
	I1101 10:45:43.131788  470791 cni.go:84] Creating CNI manager for ""
	I1101 10:45:43.131821  470791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:45:43.131845  470791 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:45:43.131868  470791 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-245622 NodeName:old-k8s-version-245622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:45:43.132023  470791 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-245622"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:45:43.132105  470791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:45:43.140691  470791 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:45:43.140778  470791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:45:43.149457  470791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:45:43.164580  470791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:45:43.179497  470791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1101 10:45:43.193115  470791 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:45:43.196885  470791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:45:43.207302  470791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:45:43.338800  470791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:45:43.356767  470791 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622 for IP: 192.168.85.2
	I1101 10:45:43.356788  470791 certs.go:195] generating shared ca certs ...
	I1101 10:45:43.356804  470791 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:45:43.356968  470791 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:45:43.357022  470791 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:45:43.357032  470791 certs.go:257] generating profile certs ...
	I1101 10:45:43.357085  470791 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.key
	I1101 10:45:43.357099  470791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt with IP's: []
	I1101 10:45:43.711941  470791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt ...
	I1101 10:45:43.711975  470791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: {Name:mkd0a831297637d778d40303859e632904b82a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:45:43.712177  470791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.key ...
	I1101 10:45:43.712193  470791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.key: {Name:mk0cba1d5adbb7fe27f90ad923023347b7f1f5b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:45:43.712294  470791 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key.6a807d81
	I1101 10:45:43.712315  470791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.crt.6a807d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:45:43.890470  470791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.crt.6a807d81 ...
	I1101 10:45:43.890499  470791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.crt.6a807d81: {Name:mk6f95cad9c93df3968aa155dbe216ba1cb10465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:45:43.890682  470791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key.6a807d81 ...
	I1101 10:45:43.890702  470791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key.6a807d81: {Name:mk5f84d72df6c037e6a1b2339a3c2f0ef06657e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:45:43.890788  470791 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.crt.6a807d81 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.crt
	I1101 10:45:43.890866  470791 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key.6a807d81 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key
	I1101 10:45:43.890924  470791 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.key
	I1101 10:45:43.890942  470791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.crt with IP's: []
	I1101 10:45:44.179318  470791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.crt ...
	I1101 10:45:44.179353  470791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.crt: {Name:mkd201cd3f3a8c5d762c0df9e8360ffcbcaa56ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:45:44.179550  470791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.key ...
	I1101 10:45:44.179565  470791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.key: {Name:mk8fd8b837a31fff072235abd12086000f09876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:45:44.179778  470791 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:45:44.179819  470791 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:45:44.179832  470791 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:45:44.179858  470791 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:45:44.179885  470791 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:45:44.179914  470791 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:45:44.179960  470791 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:45:44.180539  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:45:44.198539  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:45:44.216377  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:45:44.233997  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:45:44.253907  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:45:44.272509  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:45:44.289383  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:45:44.307649  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:45:44.324801  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:45:44.342890  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:45:44.360602  470791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:45:44.379497  470791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:45:44.393474  470791 ssh_runner.go:195] Run: openssl version
	I1101 10:45:44.399698  470791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:45:44.408353  470791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:45:44.412084  470791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:45:44.412180  470791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:45:44.453648  470791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:45:44.462396  470791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:45:44.470746  470791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:45:44.475096  470791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:45:44.475178  470791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:45:44.519711  470791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:45:44.528114  470791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:45:44.544016  470791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:45:44.548151  470791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:45:44.548260  470791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:45:44.589470  470791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:45:44.597682  470791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:45:44.601162  470791 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:45:44.601217  470791 kubeadm.go:401] StartCluster: {Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:45:44.601302  470791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:45:44.601358  470791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:45:44.628168  470791 cri.go:89] found id: ""
	I1101 10:45:44.628246  470791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:45:44.636084  470791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:45:44.643879  470791 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:45:44.643948  470791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:45:44.651721  470791 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:45:44.651760  470791 kubeadm.go:158] found existing configuration files:
	
	I1101 10:45:44.651814  470791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:45:44.659844  470791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:45:44.659909  470791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:45:44.667228  470791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:45:44.674931  470791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:45:44.675032  470791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:45:44.683745  470791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:45:44.691678  470791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:45:44.691749  470791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:45:44.699712  470791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:45:44.707689  470791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:45:44.707802  470791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:45:44.715458  470791 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:45:44.765579  470791 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 10:45:44.765667  470791 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:45:44.808695  470791 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:45:44.808776  470791 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:45:44.808825  470791 kubeadm.go:319] OS: Linux
	I1101 10:45:44.808888  470791 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:45:44.808982  470791 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:45:44.809046  470791 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:45:44.809119  470791 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:45:44.809178  470791 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:45:44.809235  470791 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:45:44.809287  470791 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:45:44.809355  470791 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:45:44.809408  470791 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:45:44.899740  470791 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:45:44.899860  470791 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:45:44.899964  470791 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 10:45:45.185369  470791 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:45:45.199767  470791 out.go:252]   - Generating certificates and keys ...
	I1101 10:45:45.199893  470791 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:45:45.199980  470791 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:45:45.494586  470791 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:45:45.817329  470791 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:45:46.296546  470791 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:45:46.985973  470791 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:45:47.390387  470791 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:45:47.390712  470791 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-245622] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:45:47.773880  470791 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:45:47.774252  470791 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-245622] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:45:48.057308  470791 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:45:49.016311  470791 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:45:49.260009  470791 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:45:49.260277  470791 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:45:49.902073  470791 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:45:50.104215  470791 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:45:50.252499  470791 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:45:50.608354  470791 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:45:50.609015  470791 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:45:50.611735  470791 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:45:50.615244  470791 out.go:252]   - Booting up control plane ...
	I1101 10:45:50.615353  470791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:45:50.615441  470791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:45:50.615511  470791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:45:50.635210  470791 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:45:50.635314  470791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:45:50.635356  470791 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:45:50.790189  470791 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 10:45:58.296757  470791 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.506653 seconds
	I1101 10:45:58.296884  470791 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:45:58.316062  470791 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:45:58.848218  470791 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:45:58.848430  470791 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-245622 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:45:59.361764  470791 kubeadm.go:319] [bootstrap-token] Using token: 5jknh4.ex399yj1r1ovph11
	I1101 10:45:59.364661  470791 out.go:252]   - Configuring RBAC rules ...
	I1101 10:45:59.364793  470791 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:45:59.369966  470791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:45:59.384304  470791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:45:59.388793  470791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:45:59.393213  470791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:45:59.397411  470791 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:45:59.413080  470791 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:45:59.695471  470791 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:45:59.786021  470791 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:45:59.787336  470791 kubeadm.go:319] 
	I1101 10:45:59.787421  470791 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:45:59.787433  470791 kubeadm.go:319] 
	I1101 10:45:59.787513  470791 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:45:59.787522  470791 kubeadm.go:319] 
	I1101 10:45:59.787548  470791 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:45:59.787613  470791 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:45:59.787669  470791 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:45:59.787677  470791 kubeadm.go:319] 
	I1101 10:45:59.787752  470791 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:45:59.787765  470791 kubeadm.go:319] 
	I1101 10:45:59.787816  470791 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:45:59.787824  470791 kubeadm.go:319] 
	I1101 10:45:59.787878  470791 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:45:59.787960  470791 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:45:59.788038  470791 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:45:59.788048  470791 kubeadm.go:319] 
	I1101 10:45:59.788136  470791 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:45:59.788220  470791 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:45:59.788228  470791 kubeadm.go:319] 
	I1101 10:45:59.788316  470791 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5jknh4.ex399yj1r1ovph11 \
	I1101 10:45:59.788428  470791 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 10:45:59.788454  470791 kubeadm.go:319] 	--control-plane 
	I1101 10:45:59.788464  470791 kubeadm.go:319] 
	I1101 10:45:59.788553  470791 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:45:59.788561  470791 kubeadm.go:319] 
	I1101 10:45:59.788647  470791 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5jknh4.ex399yj1r1ovph11 \
	I1101 10:45:59.788758  470791 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 10:45:59.793054  470791 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:45:59.793187  470791 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:45:59.793338  470791 cni.go:84] Creating CNI manager for ""
	I1101 10:45:59.793354  470791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:45:59.798600  470791 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:45:59.801469  470791 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:45:59.806125  470791 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1101 10:45:59.806149  470791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:45:59.826047  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:46:01.105458  470791 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.279370812s)
	I1101 10:46:01.105554  470791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:46:01.105748  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:01.105886  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-245622 minikube.k8s.io/updated_at=2025_11_01T10_46_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=old-k8s-version-245622 minikube.k8s.io/primary=true
	I1101 10:46:01.285904  470791 ops.go:34] apiserver oom_adj: -16
	I1101 10:46:01.286017  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:01.786295  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:02.286113  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:02.786337  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:03.286545  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:03.787002  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:04.286172  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:04.786585  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:05.287020  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:05.786683  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:06.286805  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:06.786560  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:07.287123  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:07.786186  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:08.286309  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:08.786898  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:09.286596  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:09.786757  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:10.286426  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:10.786214  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:11.286718  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:11.786182  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:12.286122  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:12.786167  470791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:46:12.895765  470791 kubeadm.go:1114] duration metric: took 11.790074376s to wait for elevateKubeSystemPrivileges
	I1101 10:46:12.895804  470791 kubeadm.go:403] duration metric: took 28.294587736s to StartCluster
	I1101 10:46:12.895825  470791 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:46:12.895888  470791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:46:12.896854  470791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:46:12.897121  470791 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:46:12.897118  470791 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:46:12.897395  470791 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:46:12.897442  470791 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:46:12.897507  470791 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-245622"
	I1101 10:46:12.897522  470791 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-245622"
	I1101 10:46:12.897549  470791 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:46:12.898005  470791 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:46:12.898509  470791 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-245622"
	I1101 10:46:12.898535  470791 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-245622"
	I1101 10:46:12.898803  470791 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:46:12.901106  470791 out.go:179] * Verifying Kubernetes components...
	I1101 10:46:12.906130  470791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:46:12.937097  470791 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:46:12.939916  470791 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-245622"
	I1101 10:46:12.939961  470791 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:46:12.940217  470791 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:46:12.940234  470791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:46:12.940287  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:12.940730  470791 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:46:12.975613  470791 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:46:12.975636  470791 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:46:12.975698  470791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:12.978718  470791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:46:13.006861  470791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:46:13.175520  470791 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:46:13.180700  470791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:46:13.239427  470791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:46:13.286163  470791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:46:13.710806  470791 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:46:13.712738  470791 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-245622" to be "Ready" ...
	I1101 10:46:14.007184  470791 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:46:14.010175  470791 addons.go:515] duration metric: took 1.112702516s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:46:14.215437  470791 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-245622" context rescaled to 1 replicas
	W1101 10:46:15.716788  470791 node_ready.go:57] node "old-k8s-version-245622" has "Ready":"False" status (will retry)
	W1101 10:46:18.216775  470791 node_ready.go:57] node "old-k8s-version-245622" has "Ready":"False" status (will retry)
	W1101 10:46:20.716545  470791 node_ready.go:57] node "old-k8s-version-245622" has "Ready":"False" status (will retry)
	W1101 10:46:23.216314  470791 node_ready.go:57] node "old-k8s-version-245622" has "Ready":"False" status (will retry)
	W1101 10:46:25.716199  470791 node_ready.go:57] node "old-k8s-version-245622" has "Ready":"False" status (will retry)
	I1101 10:46:27.717359  470791 node_ready.go:49] node "old-k8s-version-245622" is "Ready"
	I1101 10:46:27.717384  470791 node_ready.go:38] duration metric: took 14.004627401s for node "old-k8s-version-245622" to be "Ready" ...
	I1101 10:46:27.717399  470791 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:46:27.717457  470791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:46:27.732979  470791 api_server.go:72] duration metric: took 14.835831916s to wait for apiserver process to appear ...
	I1101 10:46:27.733002  470791 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:46:27.733020  470791 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:46:27.742325  470791 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:46:27.744509  470791 api_server.go:141] control plane version: v1.28.0
	I1101 10:46:27.744544  470791 api_server.go:131] duration metric: took 11.535412ms to wait for apiserver health ...
	I1101 10:46:27.744554  470791 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:46:27.748507  470791 system_pods.go:59] 8 kube-system pods found
	I1101 10:46:27.748587  470791 system_pods.go:61] "coredns-5dd5756b68-nd9sf" [76f49986-bf1b-48c2-bb9f-5f1b915e6e21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:46:27.748613  470791 system_pods.go:61] "etcd-old-k8s-version-245622" [1b2e4029-b16e-4cf3-83e1-522a86cca55a] Running
	I1101 10:46:27.748654  470791 system_pods.go:61] "kindnet-sp8fr" [8f85928a-8197-42d1-99ff-3e8aacda2af7] Running
	I1101 10:46:27.748678  470791 system_pods.go:61] "kube-apiserver-old-k8s-version-245622" [d737bb9d-f43a-4223-8278-d59ffcf24352] Running
	I1101 10:46:27.748698  470791 system_pods.go:61] "kube-controller-manager-old-k8s-version-245622" [ae4aeb88-5b07-4cbf-a840-9cfcd5558ea6] Running
	I1101 10:46:27.748719  470791 system_pods.go:61] "kube-proxy-pkwrv" [f11eb6ad-8629-41f3-bf76-3ce65cfff91d] Running
	I1101 10:46:27.748738  470791 system_pods.go:61] "kube-scheduler-old-k8s-version-245622" [03c58ed7-ece8-4f7d-95ae-8b961f82f82b] Running
	I1101 10:46:27.748771  470791 system_pods.go:61] "storage-provisioner" [4656f817-ef7d-49e6-847a-8bb2f430bf1c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:46:27.748798  470791 system_pods.go:74] duration metric: took 4.237505ms to wait for pod list to return data ...
	I1101 10:46:27.748820  470791 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:46:27.752076  470791 default_sa.go:45] found service account: "default"
	I1101 10:46:27.752106  470791 default_sa.go:55] duration metric: took 3.264392ms for default service account to be created ...
	I1101 10:46:27.752116  470791 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:46:27.767649  470791 system_pods.go:86] 8 kube-system pods found
	I1101 10:46:27.767751  470791 system_pods.go:89] "coredns-5dd5756b68-nd9sf" [76f49986-bf1b-48c2-bb9f-5f1b915e6e21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:46:27.767781  470791 system_pods.go:89] "etcd-old-k8s-version-245622" [1b2e4029-b16e-4cf3-83e1-522a86cca55a] Running
	I1101 10:46:27.767802  470791 system_pods.go:89] "kindnet-sp8fr" [8f85928a-8197-42d1-99ff-3e8aacda2af7] Running
	I1101 10:46:27.767839  470791 system_pods.go:89] "kube-apiserver-old-k8s-version-245622" [d737bb9d-f43a-4223-8278-d59ffcf24352] Running
	I1101 10:46:27.767864  470791 system_pods.go:89] "kube-controller-manager-old-k8s-version-245622" [ae4aeb88-5b07-4cbf-a840-9cfcd5558ea6] Running
	I1101 10:46:27.767897  470791 system_pods.go:89] "kube-proxy-pkwrv" [f11eb6ad-8629-41f3-bf76-3ce65cfff91d] Running
	I1101 10:46:27.767920  470791 system_pods.go:89] "kube-scheduler-old-k8s-version-245622" [03c58ed7-ece8-4f7d-95ae-8b961f82f82b] Running
	I1101 10:46:27.767942  470791 system_pods.go:89] "storage-provisioner" [4656f817-ef7d-49e6-847a-8bb2f430bf1c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:46:27.767993  470791 retry.go:31] will retry after 209.524503ms: missing components: kube-dns
	I1101 10:46:27.984849  470791 system_pods.go:86] 8 kube-system pods found
	I1101 10:46:27.984885  470791 system_pods.go:89] "coredns-5dd5756b68-nd9sf" [76f49986-bf1b-48c2-bb9f-5f1b915e6e21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:46:27.984892  470791 system_pods.go:89] "etcd-old-k8s-version-245622" [1b2e4029-b16e-4cf3-83e1-522a86cca55a] Running
	I1101 10:46:27.984899  470791 system_pods.go:89] "kindnet-sp8fr" [8f85928a-8197-42d1-99ff-3e8aacda2af7] Running
	I1101 10:46:27.984903  470791 system_pods.go:89] "kube-apiserver-old-k8s-version-245622" [d737bb9d-f43a-4223-8278-d59ffcf24352] Running
	I1101 10:46:27.984909  470791 system_pods.go:89] "kube-controller-manager-old-k8s-version-245622" [ae4aeb88-5b07-4cbf-a840-9cfcd5558ea6] Running
	I1101 10:46:27.984913  470791 system_pods.go:89] "kube-proxy-pkwrv" [f11eb6ad-8629-41f3-bf76-3ce65cfff91d] Running
	I1101 10:46:27.984917  470791 system_pods.go:89] "kube-scheduler-old-k8s-version-245622" [03c58ed7-ece8-4f7d-95ae-8b961f82f82b] Running
	I1101 10:46:27.984939  470791 system_pods.go:89] "storage-provisioner" [4656f817-ef7d-49e6-847a-8bb2f430bf1c] Running
	I1101 10:46:27.984955  470791 retry.go:31] will retry after 343.917785ms: missing components: kube-dns
	I1101 10:46:28.335439  470791 system_pods.go:86] 8 kube-system pods found
	I1101 10:46:28.335529  470791 system_pods.go:89] "coredns-5dd5756b68-nd9sf" [76f49986-bf1b-48c2-bb9f-5f1b915e6e21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:46:28.335552  470791 system_pods.go:89] "etcd-old-k8s-version-245622" [1b2e4029-b16e-4cf3-83e1-522a86cca55a] Running
	I1101 10:46:28.335591  470791 system_pods.go:89] "kindnet-sp8fr" [8f85928a-8197-42d1-99ff-3e8aacda2af7] Running
	I1101 10:46:28.335615  470791 system_pods.go:89] "kube-apiserver-old-k8s-version-245622" [d737bb9d-f43a-4223-8278-d59ffcf24352] Running
	I1101 10:46:28.335637  470791 system_pods.go:89] "kube-controller-manager-old-k8s-version-245622" [ae4aeb88-5b07-4cbf-a840-9cfcd5558ea6] Running
	I1101 10:46:28.335675  470791 system_pods.go:89] "kube-proxy-pkwrv" [f11eb6ad-8629-41f3-bf76-3ce65cfff91d] Running
	I1101 10:46:28.335701  470791 system_pods.go:89] "kube-scheduler-old-k8s-version-245622" [03c58ed7-ece8-4f7d-95ae-8b961f82f82b] Running
	I1101 10:46:28.335730  470791 system_pods.go:89] "storage-provisioner" [4656f817-ef7d-49e6-847a-8bb2f430bf1c] Running
	I1101 10:46:28.335766  470791 system_pods.go:126] duration metric: took 583.643133ms to wait for k8s-apps to be running ...
	I1101 10:46:28.335801  470791 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:46:28.335888  470791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:46:28.349992  470791 system_svc.go:56] duration metric: took 14.18201ms WaitForService to wait for kubelet
	I1101 10:46:28.350018  470791 kubeadm.go:587] duration metric: took 15.452876015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:46:28.350037  470791 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:46:28.352674  470791 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:46:28.352705  470791 node_conditions.go:123] node cpu capacity is 2
	I1101 10:46:28.352719  470791 node_conditions.go:105] duration metric: took 2.675365ms to run NodePressure ...
	I1101 10:46:28.352730  470791 start.go:242] waiting for startup goroutines ...
	I1101 10:46:28.352738  470791 start.go:247] waiting for cluster config update ...
	I1101 10:46:28.352748  470791 start.go:256] writing updated cluster config ...
	I1101 10:46:28.353067  470791 ssh_runner.go:195] Run: rm -f paused
	I1101 10:46:28.356762  470791 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:46:28.361135  470791 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nd9sf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:29.368843  470791 pod_ready.go:94] pod "coredns-5dd5756b68-nd9sf" is "Ready"
	I1101 10:46:29.368869  470791 pod_ready.go:86] duration metric: took 1.007705692s for pod "coredns-5dd5756b68-nd9sf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:29.372045  470791 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:29.378262  470791 pod_ready.go:94] pod "etcd-old-k8s-version-245622" is "Ready"
	I1101 10:46:29.378331  470791 pod_ready.go:86] duration metric: took 6.258824ms for pod "etcd-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:29.381575  470791 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:29.387190  470791 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-245622" is "Ready"
	I1101 10:46:29.387218  470791 pod_ready.go:86] duration metric: took 5.616603ms for pod "kube-apiserver-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:29.390801  470791 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:29.565690  470791 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-245622" is "Ready"
	I1101 10:46:29.565718  470791 pod_ready.go:86] duration metric: took 174.889888ms for pod "kube-controller-manager-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:29.766548  470791 pod_ready.go:83] waiting for pod "kube-proxy-pkwrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:30.166699  470791 pod_ready.go:94] pod "kube-proxy-pkwrv" is "Ready"
	I1101 10:46:30.166730  470791 pod_ready.go:86] duration metric: took 400.154533ms for pod "kube-proxy-pkwrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:30.366578  470791 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:30.765854  470791 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-245622" is "Ready"
	I1101 10:46:30.765882  470791 pod_ready.go:86] duration metric: took 399.276928ms for pod "kube-scheduler-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:46:30.765894  470791 pod_ready.go:40] duration metric: took 2.409100737s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:46:30.821294  470791 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 10:46:30.824352  470791 out.go:203] 
	W1101 10:46:30.827335  470791 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:46:30.830449  470791 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:46:30.834365  470791 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-245622" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:46:27 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:27.852324504Z" level=info msg="Created container 64070adfc054f1326425837dbea169c94650795060c84576ef22535908752610: kube-system/coredns-5dd5756b68-nd9sf/coredns" id=ce45814c-be5e-4af8-bea3-63040ca16335 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:46:27 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:27.853306905Z" level=info msg="Starting container: 64070adfc054f1326425837dbea169c94650795060c84576ef22535908752610" id=210005bd-d430-4db2-ae25-1378e47dccaa name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:46:27 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:27.855024238Z" level=info msg="Started container" PID=1929 containerID=64070adfc054f1326425837dbea169c94650795060c84576ef22535908752610 description=kube-system/coredns-5dd5756b68-nd9sf/coredns id=210005bd-d430-4db2-ae25-1378e47dccaa name=/runtime.v1.RuntimeService/StartContainer sandboxID=50a1d94a036722fa0510d67ddac064bb95c96a3f6abaa80966af03b1e45682ab
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.349631003Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b88087bf-45f7-4c17-a65f-6d92b9ad9bab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.349706852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.355144105Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7324c3834f2d60d91d6666c5d307e03cb8424a55fdd708b2298f9d423b467a4f UID:752fc038-610b-4c69-a258-06116d49c5d3 NetNS:/var/run/netns/88272df8-6dc5-46f5-86d0-f649ccaff7a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d808}] Aliases:map[]}"
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.355181825Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.377225669Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7324c3834f2d60d91d6666c5d307e03cb8424a55fdd708b2298f9d423b467a4f UID:752fc038-610b-4c69-a258-06116d49c5d3 NetNS:/var/run/netns/88272df8-6dc5-46f5-86d0-f649ccaff7a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d808}] Aliases:map[]}"
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.377387131Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.386821378Z" level=info msg="Ran pod sandbox 7324c3834f2d60d91d6666c5d307e03cb8424a55fdd708b2298f9d423b467a4f with infra container: default/busybox/POD" id=b88087bf-45f7-4c17-a65f-6d92b9ad9bab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.390717218Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=77910cad-b483-46ad-bd76-36e015cbd8ac name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.391096422Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=77910cad-b483-46ad-bd76-36e015cbd8ac name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.391229477Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=77910cad-b483-46ad-bd76-36e015cbd8ac name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.393269718Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0262c98-e3ea-4d9f-a7c9-1b29cc229fdd name=/runtime.v1.ImageService/PullImage
	Nov 01 10:46:31 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:31.39565603Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.51652888Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f0262c98-e3ea-4d9f-a7c9-1b29cc229fdd name=/runtime.v1.ImageService/PullImage
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.520016552Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d1fb52d2-a225-4763-b1c4-fa2a90517393 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.522155715Z" level=info msg="Creating container: default/busybox/busybox" id=f8f6efc6-08c4-4278-a7c7-a6d5e73f62f9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.522259888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.526981221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.527625248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.548408986Z" level=info msg="Created container 8d94c934986bbfaf5f03614262448fd8c7c31da8eba4324a35e98ad21cfca141: default/busybox/busybox" id=f8f6efc6-08c4-4278-a7c7-a6d5e73f62f9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.550605782Z" level=info msg="Starting container: 8d94c934986bbfaf5f03614262448fd8c7c31da8eba4324a35e98ad21cfca141" id=5bc44583-ba89-49eb-a702-34663380cb53 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:46:33 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:33.552517455Z" level=info msg="Started container" PID=1982 containerID=8d94c934986bbfaf5f03614262448fd8c7c31da8eba4324a35e98ad21cfca141 description=default/busybox/busybox id=5bc44583-ba89-49eb-a702-34663380cb53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7324c3834f2d60d91d6666c5d307e03cb8424a55fdd708b2298f9d423b467a4f
	Nov 01 10:46:40 old-k8s-version-245622 crio[841]: time="2025-11-01T10:46:40.251525881Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	8d94c934986bb       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   7324c3834f2d6       busybox                                          default
	64070adfc054f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   50a1d94a03672       coredns-5dd5756b68-nd9sf                         kube-system
	88d7863cca27a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   f22c465b83d81       storage-provisioner                              kube-system
	4a4eeced7b178       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   b343d5d967024       kindnet-sp8fr                                    kube-system
	92bae262ef923       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   06c1d573933ba       kube-proxy-pkwrv                                 kube-system
	1ca93b5525623       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   2e2e21f5bdbd3       etcd-old-k8s-version-245622                      kube-system
	bd860a5615a1a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   c486fcb116174       kube-apiserver-old-k8s-version-245622            kube-system
	63e29184c625a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   12e0976047de8       kube-controller-manager-old-k8s-version-245622   kube-system
	2cc3d461c6317       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   154ac127d212b       kube-scheduler-old-k8s-version-245622            kube-system
	
	
	==> coredns [64070adfc054f1326425837dbea169c94650795060c84576ef22535908752610] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54527 - 39269 "HINFO IN 8486094633888786456.8024739668748444650. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.057408593s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-245622
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-245622
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=old-k8s-version-245622
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_46_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:45:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-245622
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:46:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:46:30 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:46:30 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:46:30 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:46:30 +0000   Sat, 01 Nov 2025 10:46:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-245622
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d68081b6-bca0-4e35-910f-cc1a79899cef
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-nd9sf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-245622                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-sp8fr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-245622             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-245622    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-pkwrv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-245622             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  49s (x9 over 49s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x7 over 49s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-245622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-245622 event: Registered Node old-k8s-version-245622 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-245622 status is now: NodeReady
	
	
	==> dmesg <==
	[ +28.523616] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[ +37.261841] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1ca93b5525623158ff678361f3e2f620b0d5602ceeee4c06dbc9d83aac0aa535] <==
	{"level":"info","ts":"2025-11-01T10:45:52.984203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-01T10:45:52.984372Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:45:52.984703Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:45:52.984831Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T10:45:52.984998Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T10:45:52.985687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:45:52.985758Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:45:53.748965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-01T10:45:53.749088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-01T10:45:53.749138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-01T10:45:53.749187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:45:53.749219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T10:45:53.74927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-01T10:45:53.749302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T10:45:53.753057Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:45:53.757169Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-245622 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:45:53.757255Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:45:53.758135Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:45:53.759032Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:45:53.760985Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:45:53.75866Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:45:53.758782Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:45:53.764949Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:45:53.765037Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:45:53.769056Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 10:46:41 up  2:29,  0 user,  load average: 2.72, 3.27, 2.63
	Linux old-k8s-version-245622 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a4eeced7b1782e7f52d3b3d624735944ebdd0996a8f92dfba442166f67159e9] <==
	I1101 10:46:16.835755       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:46:16.925153       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:46:16.925327       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:46:16.925340       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:46:16.925364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:46:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:46:17.128264       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:46:17.129006       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:46:17.129032       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:46:17.129404       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:46:17.329235       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:46:17.329330       1 metrics.go:72] Registering metrics
	I1101 10:46:17.329409       1 controller.go:711] "Syncing nftables rules"
	I1101 10:46:27.132850       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:46:27.132896       1 main.go:301] handling current node
	I1101 10:46:37.129030       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:46:37.129073       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bd860a5615a1a2cd9feea45b3d1b58ad7d34cfe43e2f30770e203e271d0f8fc2] <==
	I1101 10:45:56.518268       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:45:56.518315       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:45:56.518380       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:45:56.518409       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:45:56.518434       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:45:56.518459       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:45:56.526390       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:45:56.547948       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:45:56.649712       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:45:57.303180       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:45:57.307574       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:45:57.307660       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:45:57.915292       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:45:57.962376       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:45:58.089527       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:45:58.100129       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:45:58.101497       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 10:45:58.106723       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:45:58.882182       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:45:59.678921       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:45:59.694138       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:45:59.705207       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 10:46:12.441926       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:46:12.642511       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1101 10:46:40.306232       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.85.2:46002->192.168.85.2:10250: write: broken pipe
	
	
	==> kube-controller-manager [63e29184c625a9760f07e76b14b1cb4b5422c25e67a5616938aa84cf04b73e32] <==
	I1101 10:46:11.862719       1 shared_informer.go:318] Caches are synced for persistent volume
	I1101 10:46:11.937371       1 shared_informer.go:318] Caches are synced for PV protection
	I1101 10:46:12.281027       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:46:12.281140       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:46:12.293065       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:46:12.456824       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-sp8fr"
	I1101 10:46:12.482490       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pkwrv"
	I1101 10:46:12.648320       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 10:46:12.748257       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nd9sf"
	I1101 10:46:12.761738       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-q8568"
	I1101 10:46:12.772955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.419816ms"
	I1101 10:46:12.793065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.988711ms"
	I1101 10:46:12.821602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.418568ms"
	I1101 10:46:12.821834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.393µs"
	I1101 10:46:13.774349       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 10:46:13.842100       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-q8568"
	I1101 10:46:13.860914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.260767ms"
	I1101 10:46:13.872247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.156766ms"
	I1101 10:46:13.872455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.201µs"
	I1101 10:46:27.414031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.536µs"
	I1101 10:46:27.436395       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.951µs"
	I1101 10:46:27.996742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.696µs"
	I1101 10:46:28.995926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.361388ms"
	I1101 10:46:28.996041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.205µs"
	I1101 10:46:31.732386       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [92bae262ef9238542d25ba9a4b14ebd9d443f62315383069aca4cfc4f01f9680] <==
	I1101 10:46:14.394923       1 server_others.go:69] "Using iptables proxy"
	I1101 10:46:14.409999       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 10:46:14.431966       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:46:14.433582       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:46:14.433664       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:46:14.433695       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:46:14.433755       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:46:14.433964       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:46:14.434148       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:46:14.434875       1 config.go:188] "Starting service config controller"
	I1101 10:46:14.435093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:46:14.435151       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:46:14.435197       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:46:14.436683       1 config.go:315] "Starting node config controller"
	I1101 10:46:14.436702       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:46:14.535889       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 10:46:14.536005       1 shared_informer.go:318] Caches are synced for service config
	I1101 10:46:14.537648       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2cc3d461c6317e12bcd63d34cd0bf7a79697e0c63e18050f2bb40f58b83f78a1] <==
	W1101 10:45:56.609253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 10:45:56.609293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 10:45:56.609676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 10:45:56.609699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 10:45:56.609757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 10:45:56.609773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 10:45:56.609849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 10:45:56.609870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 10:45:56.609915       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 10:45:56.609929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 10:45:56.609968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 10:45:56.609984       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 10:45:56.610001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 10:45:56.610021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 10:45:56.620623       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 10:45:56.620661       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:45:57.524303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 10:45:57.524433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 10:45:57.640977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 10:45:57.641089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 10:45:57.706735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 10:45:57.706767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 10:45:57.724578       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 10:45:57.724696       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1101 10:45:59.586291       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:46:12 old-k8s-version-245622 kubelet[1359]: I1101 10:46:12.500302    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f85928a-8197-42d1-99ff-3e8aacda2af7-lib-modules\") pod \"kindnet-sp8fr\" (UID: \"8f85928a-8197-42d1-99ff-3e8aacda2af7\") " pod="kube-system/kindnet-sp8fr"
	Nov 01 10:46:12 old-k8s-version-245622 kubelet[1359]: I1101 10:46:12.500339    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f11eb6ad-8629-41f3-bf76-3ce65cfff91d-kube-proxy\") pod \"kube-proxy-pkwrv\" (UID: \"f11eb6ad-8629-41f3-bf76-3ce65cfff91d\") " pod="kube-system/kube-proxy-pkwrv"
	Nov 01 10:46:12 old-k8s-version-245622 kubelet[1359]: I1101 10:46:12.500369    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8f85928a-8197-42d1-99ff-3e8aacda2af7-cni-cfg\") pod \"kindnet-sp8fr\" (UID: \"8f85928a-8197-42d1-99ff-3e8aacda2af7\") " pod="kube-system/kindnet-sp8fr"
	Nov 01 10:46:12 old-k8s-version-245622 kubelet[1359]: I1101 10:46:12.500405    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cpj9\" (UniqueName: \"kubernetes.io/projected/8f85928a-8197-42d1-99ff-3e8aacda2af7-kube-api-access-2cpj9\") pod \"kindnet-sp8fr\" (UID: \"8f85928a-8197-42d1-99ff-3e8aacda2af7\") " pod="kube-system/kindnet-sp8fr"
	Nov 01 10:46:13 old-k8s-version-245622 kubelet[1359]: E1101 10:46:13.613177    1359 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:46:13 old-k8s-version-245622 kubelet[1359]: E1101 10:46:13.613233    1359 projected.go:198] Error preparing data for projected volume kube-api-access-2cpj9 for pod kube-system/kindnet-sp8fr: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:46:13 old-k8s-version-245622 kubelet[1359]: E1101 10:46:13.613321    1359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f85928a-8197-42d1-99ff-3e8aacda2af7-kube-api-access-2cpj9 podName:8f85928a-8197-42d1-99ff-3e8aacda2af7 nodeName:}" failed. No retries permitted until 2025-11-01 10:46:14.113291629 +0000 UTC m=+14.483672491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2cpj9" (UniqueName: "kubernetes.io/projected/8f85928a-8197-42d1-99ff-3e8aacda2af7-kube-api-access-2cpj9") pod "kindnet-sp8fr" (UID: "8f85928a-8197-42d1-99ff-3e8aacda2af7") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:46:13 old-k8s-version-245622 kubelet[1359]: E1101 10:46:13.615830    1359 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:46:13 old-k8s-version-245622 kubelet[1359]: E1101 10:46:13.615875    1359 projected.go:198] Error preparing data for projected volume kube-api-access-tjbnh for pod kube-system/kube-proxy-pkwrv: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:46:13 old-k8s-version-245622 kubelet[1359]: E1101 10:46:13.615938    1359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f11eb6ad-8629-41f3-bf76-3ce65cfff91d-kube-api-access-tjbnh podName:f11eb6ad-8629-41f3-bf76-3ce65cfff91d nodeName:}" failed. No retries permitted until 2025-11-01 10:46:14.115918197 +0000 UTC m=+14.486299060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tjbnh" (UniqueName: "kubernetes.io/projected/f11eb6ad-8629-41f3-bf76-3ce65cfff91d-kube-api-access-tjbnh") pod "kube-proxy-pkwrv" (UID: "f11eb6ad-8629-41f3-bf76-3ce65cfff91d") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:46:14 old-k8s-version-245622 kubelet[1359]: I1101 10:46:14.952473    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pkwrv" podStartSLOduration=2.952377833 podCreationTimestamp="2025-11-01 10:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:46:14.951903293 +0000 UTC m=+15.322284164" watchObservedRunningTime="2025-11-01 10:46:14.952377833 +0000 UTC m=+15.322758729"
	Nov 01 10:46:16 old-k8s-version-245622 kubelet[1359]: I1101 10:46:16.954121    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-sp8fr" podStartSLOduration=2.478502431 podCreationTimestamp="2025-11-01 10:46:12 +0000 UTC" firstStartedPulling="2025-11-01 10:46:14.277059082 +0000 UTC m=+14.647439953" lastFinishedPulling="2025-11-01 10:46:16.752634159 +0000 UTC m=+17.123015030" observedRunningTime="2025-11-01 10:46:16.953309353 +0000 UTC m=+17.323690216" watchObservedRunningTime="2025-11-01 10:46:16.954077508 +0000 UTC m=+17.324458379"
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: I1101 10:46:27.369209    1359 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: I1101 10:46:27.413618    1359 topology_manager.go:215] "Topology Admit Handler" podUID="76f49986-bf1b-48c2-bb9f-5f1b915e6e21" podNamespace="kube-system" podName="coredns-5dd5756b68-nd9sf"
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: I1101 10:46:27.416826    1359 topology_manager.go:215] "Topology Admit Handler" podUID="4656f817-ef7d-49e6-847a-8bb2f430bf1c" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: I1101 10:46:27.505631    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4656f817-ef7d-49e6-847a-8bb2f430bf1c-tmp\") pod \"storage-provisioner\" (UID: \"4656f817-ef7d-49e6-847a-8bb2f430bf1c\") " pod="kube-system/storage-provisioner"
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: I1101 10:46:27.505699    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjtf\" (UniqueName: \"kubernetes.io/projected/4656f817-ef7d-49e6-847a-8bb2f430bf1c-kube-api-access-fxjtf\") pod \"storage-provisioner\" (UID: \"4656f817-ef7d-49e6-847a-8bb2f430bf1c\") " pod="kube-system/storage-provisioner"
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: I1101 10:46:27.505732    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76f49986-bf1b-48c2-bb9f-5f1b915e6e21-config-volume\") pod \"coredns-5dd5756b68-nd9sf\" (UID: \"76f49986-bf1b-48c2-bb9f-5f1b915e6e21\") " pod="kube-system/coredns-5dd5756b68-nd9sf"
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: I1101 10:46:27.505755    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2nm\" (UniqueName: \"kubernetes.io/projected/76f49986-bf1b-48c2-bb9f-5f1b915e6e21-kube-api-access-mq2nm\") pod \"coredns-5dd5756b68-nd9sf\" (UID: \"76f49986-bf1b-48c2-bb9f-5f1b915e6e21\") " pod="kube-system/coredns-5dd5756b68-nd9sf"
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: W1101 10:46:27.746079    1359 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/crio-f22c465b83d8152b77a14d5ff20b3a2a9e13737b24ac3565e2666ba7467397a4 WatchSource:0}: Error finding container f22c465b83d8152b77a14d5ff20b3a2a9e13737b24ac3565e2666ba7467397a4: Status 404 returned error can't find the container with id f22c465b83d8152b77a14d5ff20b3a2a9e13737b24ac3565e2666ba7467397a4
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: W1101 10:46:27.781734    1359 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/crio-50a1d94a036722fa0510d67ddac064bb95c96a3f6abaa80966af03b1e45682ab WatchSource:0}: Error finding container 50a1d94a036722fa0510d67ddac064bb95c96a3f6abaa80966af03b1e45682ab: Status 404 returned error can't find the container with id 50a1d94a036722fa0510d67ddac064bb95c96a3f6abaa80966af03b1e45682ab
	Nov 01 10:46:27 old-k8s-version-245622 kubelet[1359]: I1101 10:46:27.981692    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.981648718 podCreationTimestamp="2025-11-01 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:46:27.980290666 +0000 UTC m=+28.350671537" watchObservedRunningTime="2025-11-01 10:46:27.981648718 +0000 UTC m=+28.352029581"
	Nov 01 10:46:28 old-k8s-version-245622 kubelet[1359]: I1101 10:46:28.984012    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-nd9sf" podStartSLOduration=16.983969169 podCreationTimestamp="2025-11-01 10:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:46:28.003199675 +0000 UTC m=+28.373580554" watchObservedRunningTime="2025-11-01 10:46:28.983969169 +0000 UTC m=+29.354350040"
	Nov 01 10:46:31 old-k8s-version-245622 kubelet[1359]: I1101 10:46:31.047373    1359 topology_manager.go:215] "Topology Admit Handler" podUID="752fc038-610b-4c69-a258-06116d49c5d3" podNamespace="default" podName="busybox"
	Nov 01 10:46:31 old-k8s-version-245622 kubelet[1359]: I1101 10:46:31.226907    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2rdw\" (UniqueName: \"kubernetes.io/projected/752fc038-610b-4c69-a258-06116d49c5d3-kube-api-access-q2rdw\") pod \"busybox\" (UID: \"752fc038-610b-4c69-a258-06116d49c5d3\") " pod="default/busybox"
	
	
	==> storage-provisioner [88d7863cca27a3ec68bf9a9307ffbc3a2e8b856f2d490bfab4bffcb70c505429] <==
	I1101 10:46:27.831834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:46:27.848210       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:46:27.848342       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:46:27.878959       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:46:27.879484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51e990f1-a0af-4cdb-b36a-ecec58b0ed5a", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-245622_c5f9c99b-903b-484a-a3da-edec20bcccc3 became leader
	I1101 10:46:27.881725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245622_c5f9c99b-903b-484a-a3da-edec20bcccc3!
	I1101 10:46:27.982662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245622_c5f9c99b-903b-484a-a3da-edec20bcccc3!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-245622 -n old-k8s-version-245622
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-245622 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.59s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-245622 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-245622 --alsologtostderr -v=1: exit status 80 (1.949990248s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-245622 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:47:54.616068  476818 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:47:54.616254  476818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:47:54.616264  476818 out.go:374] Setting ErrFile to fd 2...
	I1101 10:47:54.616269  476818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:47:54.616601  476818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:47:54.616976  476818 out.go:368] Setting JSON to false
	I1101 10:47:54.617019  476818 mustload.go:66] Loading cluster: old-k8s-version-245622
	I1101 10:47:54.617463  476818 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:47:54.617976  476818 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:54.641212  476818 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:47:54.641562  476818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:47:54.716848  476818 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:47:54.70711354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:47:54.717618  476818 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-245622 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:47:54.721160  476818 out.go:179] * Pausing node old-k8s-version-245622 ... 
	I1101 10:47:54.724821  476818 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:47:54.725203  476818 ssh_runner.go:195] Run: systemctl --version
	I1101 10:47:54.725257  476818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:54.748132  476818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:54.856609  476818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:47:54.870470  476818 pause.go:52] kubelet running: true
	I1101 10:47:54.870544  476818 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:47:55.100772  476818 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:47:55.100864  476818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:47:55.181093  476818 cri.go:89] found id: "b1a51a80d21f8f7f8ba1d74c8ad7ef2ab7b934b22e7e6b778d90a80c64f1f40c"
	I1101 10:47:55.181165  476818 cri.go:89] found id: "1b3246cbd8f5ca61597f0697e184a3779abdd98c8882dfa56bc1eff233eb91f7"
	I1101 10:47:55.181187  476818 cri.go:89] found id: "abbd066627a451cd1a93700efb4085a69a034a4ba5ec1e3aa6a363490f607319"
	I1101 10:47:55.181208  476818 cri.go:89] found id: "161d854de567b94f5c2d993b8ba213ead9931dd70f943c41c660ff3d0f4b9fc5"
	I1101 10:47:55.181246  476818 cri.go:89] found id: "03411a1f4b138c8c725a6a4425f3dfb5b56fa9bd5b1cf0ba2d709f16df5fc3ae"
	I1101 10:47:55.181254  476818 cri.go:89] found id: "7521d7f517bde774e4ae7db3c7fa527b4b635113e737a68e9c588db1e8e80227"
	I1101 10:47:55.181258  476818 cri.go:89] found id: "6bcc06202ec6dfdc8f6841ebe71d51a48215405eae12c71de3ca5b5238bb7214"
	I1101 10:47:55.181261  476818 cri.go:89] found id: "ffc25019ddaa4f34ce35fea177fcd8277a5073c8baf6d86e0373d70389879419"
	I1101 10:47:55.181265  476818 cri.go:89] found id: "d0dec16486a37ef6f1e98204405322aa6db144ec63e0d58b3a5bacb4e12208d0"
	I1101 10:47:55.181271  476818 cri.go:89] found id: "475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f"
	I1101 10:47:55.181274  476818 cri.go:89] found id: "b9f762a0b850c4519e12cfd7ea375cfaf75618638005b8751293904d0528b27d"
	I1101 10:47:55.181277  476818 cri.go:89] found id: ""
	I1101 10:47:55.181327  476818 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:47:55.193511  476818 retry.go:31] will retry after 328.374867ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:47:55Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:47:55.522049  476818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:47:55.536497  476818 pause.go:52] kubelet running: false
	I1101 10:47:55.536610  476818 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:47:55.707977  476818 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:47:55.708055  476818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:47:55.783793  476818 cri.go:89] found id: "b1a51a80d21f8f7f8ba1d74c8ad7ef2ab7b934b22e7e6b778d90a80c64f1f40c"
	I1101 10:47:55.783816  476818 cri.go:89] found id: "1b3246cbd8f5ca61597f0697e184a3779abdd98c8882dfa56bc1eff233eb91f7"
	I1101 10:47:55.783822  476818 cri.go:89] found id: "abbd066627a451cd1a93700efb4085a69a034a4ba5ec1e3aa6a363490f607319"
	I1101 10:47:55.783827  476818 cri.go:89] found id: "161d854de567b94f5c2d993b8ba213ead9931dd70f943c41c660ff3d0f4b9fc5"
	I1101 10:47:55.783831  476818 cri.go:89] found id: "03411a1f4b138c8c725a6a4425f3dfb5b56fa9bd5b1cf0ba2d709f16df5fc3ae"
	I1101 10:47:55.783847  476818 cri.go:89] found id: "7521d7f517bde774e4ae7db3c7fa527b4b635113e737a68e9c588db1e8e80227"
	I1101 10:47:55.783851  476818 cri.go:89] found id: "6bcc06202ec6dfdc8f6841ebe71d51a48215405eae12c71de3ca5b5238bb7214"
	I1101 10:47:55.783854  476818 cri.go:89] found id: "ffc25019ddaa4f34ce35fea177fcd8277a5073c8baf6d86e0373d70389879419"
	I1101 10:47:55.783857  476818 cri.go:89] found id: "d0dec16486a37ef6f1e98204405322aa6db144ec63e0d58b3a5bacb4e12208d0"
	I1101 10:47:55.783868  476818 cri.go:89] found id: "475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f"
	I1101 10:47:55.783875  476818 cri.go:89] found id: "b9f762a0b850c4519e12cfd7ea375cfaf75618638005b8751293904d0528b27d"
	I1101 10:47:55.783878  476818 cri.go:89] found id: ""
	I1101 10:47:55.783928  476818 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:47:55.795739  476818 retry.go:31] will retry after 404.259783ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:47:55Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:47:56.200310  476818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:47:56.219921  476818 pause.go:52] kubelet running: false
	I1101 10:47:56.219996  476818 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:47:56.403323  476818 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:47:56.403423  476818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:47:56.478977  476818 cri.go:89] found id: "b1a51a80d21f8f7f8ba1d74c8ad7ef2ab7b934b22e7e6b778d90a80c64f1f40c"
	I1101 10:47:56.478996  476818 cri.go:89] found id: "1b3246cbd8f5ca61597f0697e184a3779abdd98c8882dfa56bc1eff233eb91f7"
	I1101 10:47:56.479001  476818 cri.go:89] found id: "abbd066627a451cd1a93700efb4085a69a034a4ba5ec1e3aa6a363490f607319"
	I1101 10:47:56.479005  476818 cri.go:89] found id: "161d854de567b94f5c2d993b8ba213ead9931dd70f943c41c660ff3d0f4b9fc5"
	I1101 10:47:56.479009  476818 cri.go:89] found id: "03411a1f4b138c8c725a6a4425f3dfb5b56fa9bd5b1cf0ba2d709f16df5fc3ae"
	I1101 10:47:56.479012  476818 cri.go:89] found id: "7521d7f517bde774e4ae7db3c7fa527b4b635113e737a68e9c588db1e8e80227"
	I1101 10:47:56.479016  476818 cri.go:89] found id: "6bcc06202ec6dfdc8f6841ebe71d51a48215405eae12c71de3ca5b5238bb7214"
	I1101 10:47:56.479019  476818 cri.go:89] found id: "ffc25019ddaa4f34ce35fea177fcd8277a5073c8baf6d86e0373d70389879419"
	I1101 10:47:56.479023  476818 cri.go:89] found id: "d0dec16486a37ef6f1e98204405322aa6db144ec63e0d58b3a5bacb4e12208d0"
	I1101 10:47:56.479029  476818 cri.go:89] found id: "475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f"
	I1101 10:47:56.479033  476818 cri.go:89] found id: "b9f762a0b850c4519e12cfd7ea375cfaf75618638005b8751293904d0528b27d"
	I1101 10:47:56.479036  476818 cri.go:89] found id: ""
	I1101 10:47:56.479083  476818 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:47:56.494010  476818 out.go:203] 
	W1101 10:47:56.497023  476818 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:47:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:47:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:47:56.497041  476818 out.go:285] * 
	* 
	W1101 10:47:56.502499  476818 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:47:56.505503  476818 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-245622 --alsologtostderr -v=1 failed: exit status 80
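The stderr above shows why pause exits with status 80: after disabling the kubelet and listing CRI containers, minikube probes running containers with "sudo runc list -f json", and on this node that probe keeps failing with "open /run/runc: no such file or directory" until the retries are exhausted. A minimal Go sketch of the probe-and-retry pattern (illustrative, not minikube's pause.go/retry.go; the delays are made up, and sudo and runc are assumed to be available where the code runs):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runcList runs the same probe seen in the log: `sudo runc list -f json`.
func runcList() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	// Illustrative fixed delays; the retry intervals in the log are jittered.
	delays := []time.Duration{300 * time.Millisecond, 400 * time.Millisecond}

	out, err := runcList()
	for _, d := range delays {
		if err == nil {
			break
		}
		fmt.Printf("retrying after %v: %v\n%s", d, err, out)
		time.Sleep(d)
		out, err = runcList()
	}
	if err != nil {
		// On this node the probe never succeeds:
		// "open /run/runc: no such file or directory".
		fmt.Printf("giving up: %v\n%s", err, out)
		return
	}
	fmt.Printf("running containers: %s\n", out)
}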
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-245622
helpers_test.go:243: (dbg) docker inspect old-k8s-version-245622:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3",
	        "Created": "2025-11-01T10:45:35.000054348Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474705,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:46:55.451826202Z",
	            "FinishedAt": "2025-11-01T10:46:54.60786023Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/hosts",
	        "LogPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3-json.log",
	        "Name": "/old-k8s-version-245622",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-245622:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-245622",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3",
	                "LowerDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-245622",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-245622/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-245622",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-245622",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-245622",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e04cfcc14553c87638ae07b8c47082312c90a3a6449a7d680b4557d1c54aa2a5",
	            "SandboxKey": "/var/run/docker/netns/e04cfcc14553",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-245622": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:48:c9:1a:5b:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "886e51d4881a05bd8806566eef0c793a83105f195753997f1581ba0395c0dfba",
	                    "EndpointID": "6a7cf7778a1b531de25edaa1d1f251932349261c6f7f669e20010a717430dad8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-245622",
	                        "c9c5181d464a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
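The inspect output above reports State.Status "running" with Paused false, i.e. the container was never actually paused. A minimal Go sketch for pulling just those fields out of `docker inspect` JSON (an illustration, not part of the harness; it assumes the docker CLI is available and reuses the container name from this report):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry keeps only the fields of interest from `docker inspect`.
type inspectEntry struct {
	Name  string `json:"Name"`
	State struct {
		Status string `json:"Status"`
		Paused bool   `json:"Paused"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-245622").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Println("decoding inspect output failed:", err)
		return
	}
	for _, e := range entries {
		// For the container above this prints: status=running paused=false
		fmt.Printf("%s: status=%s paused=%v\n", e.Name, e.State.Status, e.State.Paused)
	}
}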
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-245622 -n old-k8s-version-245622
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-245622 -n old-k8s-version-245622: exit status 2 (359.829468ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
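Here `minikube status` exits non-zero even though the host column prints Running, which is why the helper notes the error "may be ok". A minimal Go sketch for capturing both the output and the exit code of that invocation (illustrative; the binary path and profile name are copied from the report, and no meaning is assigned to the particular exit code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as helpers_test.go:247 above.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}",
		"-p", "old-k8s-version-245622",
		"-n", "old-k8s-version-245622")
	out, err := cmd.CombinedOutput()

	exitCode := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		exitCode = exitErr.ExitCode()
	} else if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	// In the run above, the host column reads "Running" while the command exits 2.
	fmt.Printf("host: %s (exit code %d)\n", strings.TrimSpace(string(out)), exitCode)
}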
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-245622 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-245622 logs -n 25: (1.396475752s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-883951 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo containerd config dump                                                                                                                                                                                                  │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo crio config                                                                                                                                                                                                             │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ delete  │ -p cilium-883951                                                                                                                                                                                                                              │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p force-systemd-env-555657 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-555657  │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p kubernetes-upgrade-946953                                                                                                                                                                                                                  │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p force-systemd-env-555657                                                                                                                                                                                                                   │ force-systemd-env-555657  │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-308600    │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p cert-options-186677 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ cert-options-186677 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ -p cert-options-186677 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ delete  │ -p cert-options-186677                                                                                                                                                                                                                        │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │                     │
	│ stop    │ -p old-k8s-version-245622 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-245622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:47 UTC │
	│ image   │ old-k8s-version-245622 image list --format=json                                                                                                                                                                                               │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │ 01 Nov 25 10:47 UTC │
	│ pause   │ -p old-k8s-version-245622 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:46:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:46:55.170720  474577 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:46:55.170839  474577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:46:55.170875  474577 out.go:374] Setting ErrFile to fd 2...
	I1101 10:46:55.170890  474577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:46:55.171175  474577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:46:55.171570  474577 out.go:368] Setting JSON to false
	I1101 10:46:55.172520  474577 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8967,"bootTime":1761985048,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:46:55.172598  474577 start.go:143] virtualization:  
	I1101 10:46:55.175760  474577 out.go:179] * [old-k8s-version-245622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:46:55.179989  474577 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:46:55.180026  474577 notify.go:221] Checking for updates...
	I1101 10:46:55.186589  474577 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:46:55.189633  474577 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:46:55.192634  474577 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:46:55.195667  474577 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:46:55.198818  474577 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:46:55.202411  474577 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:46:55.206038  474577 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 10:46:55.208971  474577 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:46:55.245144  474577 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:46:55.245282  474577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:46:55.301503  474577 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:46:55.291915478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:46:55.301610  474577 docker.go:319] overlay module found
	I1101 10:46:55.304657  474577 out.go:179] * Using the docker driver based on existing profile
	I1101 10:46:55.307460  474577 start.go:309] selected driver: docker
	I1101 10:46:55.307482  474577 start.go:930] validating driver "docker" against &{Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:46:55.307590  474577 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:46:55.308375  474577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:46:55.362085  474577 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:46:55.353144743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:46:55.362436  474577 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:46:55.362472  474577 cni.go:84] Creating CNI manager for ""
	I1101 10:46:55.362533  474577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:46:55.362572  474577 start.go:353] cluster config:
	{Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:46:55.367597  474577 out.go:179] * Starting "old-k8s-version-245622" primary control-plane node in "old-k8s-version-245622" cluster
	I1101 10:46:55.370501  474577 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:46:55.373456  474577 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:46:55.376281  474577 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:46:55.376338  474577 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 10:46:55.376352  474577 cache.go:59] Caching tarball of preloaded images
	I1101 10:46:55.376382  474577 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:46:55.376479  474577 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:46:55.376489  474577 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:46:55.376612  474577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/config.json ...
	I1101 10:46:55.396988  474577 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:46:55.397013  474577 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:46:55.397031  474577 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:46:55.397055  474577 start.go:360] acquireMachinesLock for old-k8s-version-245622: {Name:mkfbe1634de833e16a5a7580b9fd5f9c75eacf88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:46:55.397129  474577 start.go:364] duration metric: took 47.262µs to acquireMachinesLock for "old-k8s-version-245622"
	I1101 10:46:55.397153  474577 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:46:55.397159  474577 fix.go:54] fixHost starting: 
	I1101 10:46:55.397426  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:46:55.414267  474577 fix.go:112] recreateIfNeeded on old-k8s-version-245622: state=Stopped err=<nil>
	W1101 10:46:55.414302  474577 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:46:55.417787  474577 out.go:252] * Restarting existing docker container for "old-k8s-version-245622" ...
	I1101 10:46:55.417898  474577 cli_runner.go:164] Run: docker start old-k8s-version-245622
	I1101 10:46:55.710043  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:46:55.735249  474577 kic.go:430] container "old-k8s-version-245622" state is running.
	I1101 10:46:55.735642  474577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:46:55.755648  474577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/config.json ...
	I1101 10:46:55.756263  474577 machine.go:94] provisionDockerMachine start ...
	I1101 10:46:55.756367  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:55.775530  474577 main.go:143] libmachine: Using SSH client type: native
	I1101 10:46:55.775878  474577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1101 10:46:55.775889  474577 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:46:55.776558  474577 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:46:58.928577  474577 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245622
	
	I1101 10:46:58.928601  474577 ubuntu.go:182] provisioning hostname "old-k8s-version-245622"
	I1101 10:46:58.928663  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:58.953079  474577 main.go:143] libmachine: Using SSH client type: native
	I1101 10:46:58.953386  474577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1101 10:46:58.953403  474577 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-245622 && echo "old-k8s-version-245622" | sudo tee /etc/hostname
	I1101 10:46:59.114998  474577 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245622
	
	I1101 10:46:59.115108  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:59.133238  474577 main.go:143] libmachine: Using SSH client type: native
	I1101 10:46:59.133567  474577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1101 10:46:59.133590  474577 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-245622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-245622/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-245622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:46:59.289348  474577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:46:59.289442  474577 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:46:59.289514  474577 ubuntu.go:190] setting up certificates
	I1101 10:46:59.289546  474577 provision.go:84] configureAuth start
	I1101 10:46:59.289658  474577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:46:59.307200  474577 provision.go:143] copyHostCerts
	I1101 10:46:59.307271  474577 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:46:59.307287  474577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:46:59.307365  474577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:46:59.307475  474577 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:46:59.307481  474577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:46:59.307511  474577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:46:59.307568  474577 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:46:59.307573  474577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:46:59.307598  474577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:46:59.307651  474577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-245622 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-245622]
	I1101 10:46:59.672060  474577 provision.go:177] copyRemoteCerts
	I1101 10:46:59.672150  474577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:46:59.672233  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:59.689796  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:46:59.796720  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:46:59.813671  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:46:59.832614  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:46:59.852871  474577 provision.go:87] duration metric: took 563.295826ms to configureAuth
	I1101 10:46:59.852895  474577 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:46:59.853122  474577 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:46:59.853224  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:59.871398  474577 main.go:143] libmachine: Using SSH client type: native
	I1101 10:46:59.871760  474577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1101 10:46:59.871775  474577 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:47:00.500539  474577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:47:00.500574  474577 machine.go:97] duration metric: took 4.744294476s to provisionDockerMachine
	I1101 10:47:00.500587  474577 start.go:293] postStartSetup for "old-k8s-version-245622" (driver="docker")
	I1101 10:47:00.500617  474577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:47:00.500751  474577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:47:00.500829  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:00.527500  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:00.637018  474577 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:47:00.640614  474577 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:47:00.640646  474577 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:47:00.640658  474577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:47:00.640738  474577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:47:00.640854  474577 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:47:00.641031  474577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:47:00.648704  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:47:00.667091  474577 start.go:296] duration metric: took 166.469161ms for postStartSetup
	I1101 10:47:00.667176  474577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:47:00.667245  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:00.685364  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:00.790340  474577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:47:00.795327  474577 fix.go:56] duration metric: took 5.398147287s for fixHost
	I1101 10:47:00.795354  474577 start.go:83] releasing machines lock for "old-k8s-version-245622", held for 5.398212617s
	I1101 10:47:00.795434  474577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:47:00.811946  474577 ssh_runner.go:195] Run: cat /version.json
	I1101 10:47:00.812008  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:00.812270  474577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:47:00.812324  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:00.830990  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:00.831147  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:01.029987  474577 ssh_runner.go:195] Run: systemctl --version
	I1101 10:47:01.036374  474577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:47:01.075618  474577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:47:01.081313  474577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:47:01.081392  474577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:47:01.089600  474577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:47:01.089634  474577 start.go:496] detecting cgroup driver to use...
	I1101 10:47:01.089668  474577 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:47:01.089723  474577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:47:01.109739  474577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:47:01.123919  474577 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:47:01.124050  474577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:47:01.142035  474577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:47:01.155863  474577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:47:01.280281  474577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:47:01.410774  474577 docker.go:234] disabling docker service ...
	I1101 10:47:01.410894  474577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:47:01.426580  474577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:47:01.441982  474577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:47:01.554067  474577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:47:01.667257  474577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:47:01.680575  474577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:47:01.696497  474577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:47:01.696565  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.706214  474577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:47:01.706296  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.715497  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.725051  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.733843  474577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:47:01.742680  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.752070  474577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.760791  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.769951  474577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:47:01.778118  474577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:47:01.785963  474577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:47:01.907388  474577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:47:02.053052  474577 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:47:02.053172  474577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:47:02.057315  474577 start.go:564] Will wait 60s for crictl version
	I1101 10:47:02.057430  474577 ssh_runner.go:195] Run: which crictl
	I1101 10:47:02.061132  474577 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:47:02.086466  474577 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:47:02.086606  474577 ssh_runner.go:195] Run: crio --version
	I1101 10:47:02.124029  474577 ssh_runner.go:195] Run: crio --version
	I1101 10:47:02.156009  474577 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 10:47:02.158870  474577 cli_runner.go:164] Run: docker network inspect old-k8s-version-245622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:47:02.175998  474577 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:47:02.180110  474577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:47:02.190429  474577 kubeadm.go:884] updating cluster {Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:47:02.190559  474577 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:47:02.190611  474577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:47:02.228473  474577 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:47:02.228498  474577 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:47:02.228558  474577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:47:02.257464  474577 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:47:02.257488  474577 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:47:02.257497  474577 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 10:47:02.257600  474577 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-245622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:47:02.257687  474577 ssh_runner.go:195] Run: crio config
	I1101 10:47:02.329281  474577 cni.go:84] Creating CNI manager for ""
	I1101 10:47:02.329305  474577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:47:02.329322  474577 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:47:02.329345  474577 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-245622 NodeName:old-k8s-version-245622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:47:02.329489  474577 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-245622"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:47:02.329570  474577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:47:02.337568  474577 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:47:02.337656  474577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:47:02.345694  474577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:47:02.359633  474577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:47:02.373086  474577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1101 10:47:02.387746  474577 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:47:02.392278  474577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:47:02.403459  474577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:47:02.526591  474577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:47:02.549907  474577 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622 for IP: 192.168.85.2
	I1101 10:47:02.549928  474577 certs.go:195] generating shared ca certs ...
	I1101 10:47:02.549944  474577 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:47:02.550139  474577 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:47:02.550209  474577 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:47:02.550224  474577 certs.go:257] generating profile certs ...
	I1101 10:47:02.550337  474577 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.key
	I1101 10:47:02.550428  474577 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key.6a807d81
	I1101 10:47:02.550502  474577 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.key
	I1101 10:47:02.550644  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:47:02.550692  474577 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:47:02.550712  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:47:02.550739  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:47:02.550777  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:47:02.550811  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:47:02.550875  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:47:02.551961  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:47:02.579858  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:47:02.602243  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:47:02.624730  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:47:02.654717  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:47:02.685575  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:47:02.711490  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:47:02.739393  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:47:02.760322  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:47:02.778231  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:47:02.804732  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:47:02.824879  474577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:47:02.840276  474577 ssh_runner.go:195] Run: openssl version
	I1101 10:47:02.846490  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:47:02.855119  474577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:47:02.858949  474577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:47:02.859015  474577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:47:02.900627  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:47:02.908703  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:47:02.917342  474577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:47:02.921676  474577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:47:02.921773  474577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:47:02.963367  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:47:02.971448  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:47:02.980776  474577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:47:02.984643  474577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:47:02.984730  474577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:47:03.027414  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:47:03.035790  474577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:47:03.040038  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:47:03.082158  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:47:03.129329  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:47:03.198656  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:47:03.287285  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:47:03.385986  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:47:03.468479  474577 kubeadm.go:401] StartCluster: {Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:47:03.468571  474577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:47:03.468651  474577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:47:03.540173  474577 cri.go:89] found id: "7521d7f517bde774e4ae7db3c7fa527b4b635113e737a68e9c588db1e8e80227"
	I1101 10:47:03.540200  474577 cri.go:89] found id: "6bcc06202ec6dfdc8f6841ebe71d51a48215405eae12c71de3ca5b5238bb7214"
	I1101 10:47:03.540206  474577 cri.go:89] found id: "ffc25019ddaa4f34ce35fea177fcd8277a5073c8baf6d86e0373d70389879419"
	I1101 10:47:03.540218  474577 cri.go:89] found id: "d0dec16486a37ef6f1e98204405322aa6db144ec63e0d58b3a5bacb4e12208d0"
	I1101 10:47:03.540224  474577 cri.go:89] found id: ""
	I1101 10:47:03.540275  474577 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:47:03.560823  474577 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:47:03Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:47:03.560896  474577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:47:03.575564  474577 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:47:03.575585  474577 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:47:03.575640  474577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:47:03.587110  474577 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:47:03.587734  474577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-245622" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:47:03.588049  474577 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-245622" cluster setting kubeconfig missing "old-k8s-version-245622" context setting]
	I1101 10:47:03.588492  474577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:47:03.590197  474577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:47:03.602015  474577 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:47:03.602060  474577 kubeadm.go:602] duration metric: took 26.468837ms to restartPrimaryControlPlane
	I1101 10:47:03.602070  474577 kubeadm.go:403] duration metric: took 133.602642ms to StartCluster
	I1101 10:47:03.602086  474577 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:47:03.602149  474577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:47:03.603026  474577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:47:03.603238  474577 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:47:03.603539  474577 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:47:03.603589  474577 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:47:03.603658  474577 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-245622"
	I1101 10:47:03.603675  474577 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-245622"
	W1101 10:47:03.603682  474577 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:47:03.603702  474577 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:47:03.604379  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:03.604446  474577 addons.go:70] Setting dashboard=true in profile "old-k8s-version-245622"
	I1101 10:47:03.604468  474577 addons.go:239] Setting addon dashboard=true in "old-k8s-version-245622"
	W1101 10:47:03.604475  474577 addons.go:248] addon dashboard should already be in state true
	I1101 10:47:03.604506  474577 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:47:03.604965  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:03.606716  474577 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-245622"
	I1101 10:47:03.606747  474577 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-245622"
	I1101 10:47:03.607030  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:03.607163  474577 out.go:179] * Verifying Kubernetes components...
	I1101 10:47:03.610866  474577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:47:03.652103  474577 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-245622"
	W1101 10:47:03.652127  474577 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:47:03.652152  474577 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:47:03.658052  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:03.664097  474577 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:47:03.667133  474577 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:47:03.670997  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:47:03.671025  474577 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:47:03.671094  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:03.680622  474577 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:47:03.683530  474577 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:47:03.683565  474577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:47:03.683630  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:03.719865  474577 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:47:03.719887  474577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:47:03.719955  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:03.743401  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:03.746265  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:03.764228  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:03.958556  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:47:03.958630  474577 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:47:03.981455  474577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:47:04.026482  474577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:47:04.041783  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:47:04.041857  474577 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:47:04.047945  474577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:47:04.052082  474577 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-245622" to be "Ready" ...
	I1101 10:47:04.118185  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:47:04.118206  474577 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:47:04.194435  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:47:04.194455  474577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:47:04.266889  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:47:04.266909  474577 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:47:04.338157  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:47:04.338222  474577 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:47:04.361630  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:47:04.361693  474577 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:47:04.381414  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:47:04.381478  474577 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:47:04.403330  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:47:04.403394  474577 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:47:04.431284  474577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:47:07.599942  474577 node_ready.go:49] node "old-k8s-version-245622" is "Ready"
	I1101 10:47:07.599970  474577 node_ready.go:38] duration metric: took 3.547794573s for node "old-k8s-version-245622" to be "Ready" ...
	I1101 10:47:07.599984  474577 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:47:07.600043  474577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:47:09.072448  474577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.045871767s)
	I1101 10:47:09.072554  474577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.024543901s)
	I1101 10:47:09.679271  474577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.247903313s)
	I1101 10:47:09.679522  474577 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.079467628s)
	I1101 10:47:09.679574  474577 api_server.go:72] duration metric: took 6.076302364s to wait for apiserver process to appear ...
	I1101 10:47:09.679617  474577 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:47:09.679647  474577 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:47:09.682714  474577 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-245622 addons enable metrics-server
	
	I1101 10:47:09.685690  474577 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 10:47:09.688943  474577 addons.go:515] duration metric: took 6.085353082s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:47:09.689733  474577 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:47:09.691380  474577 api_server.go:141] control plane version: v1.28.0
	I1101 10:47:09.691402  474577 api_server.go:131] duration metric: took 11.765453ms to wait for apiserver health ...
	I1101 10:47:09.691411  474577 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:47:09.696700  474577 system_pods.go:59] 8 kube-system pods found
	I1101 10:47:09.696786  474577 system_pods.go:61] "coredns-5dd5756b68-nd9sf" [76f49986-bf1b-48c2-bb9f-5f1b915e6e21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:47:09.696811  474577 system_pods.go:61] "etcd-old-k8s-version-245622" [1b2e4029-b16e-4cf3-83e1-522a86cca55a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:47:09.696843  474577 system_pods.go:61] "kindnet-sp8fr" [8f85928a-8197-42d1-99ff-3e8aacda2af7] Running
	I1101 10:47:09.696869  474577 system_pods.go:61] "kube-apiserver-old-k8s-version-245622" [d737bb9d-f43a-4223-8278-d59ffcf24352] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:47:09.696893  474577 system_pods.go:61] "kube-controller-manager-old-k8s-version-245622" [ae4aeb88-5b07-4cbf-a840-9cfcd5558ea6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:47:09.696989  474577 system_pods.go:61] "kube-proxy-pkwrv" [f11eb6ad-8629-41f3-bf76-3ce65cfff91d] Running
	I1101 10:47:09.697025  474577 system_pods.go:61] "kube-scheduler-old-k8s-version-245622" [03c58ed7-ece8-4f7d-95ae-8b961f82f82b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:47:09.697045  474577 system_pods.go:61] "storage-provisioner" [4656f817-ef7d-49e6-847a-8bb2f430bf1c] Running
	I1101 10:47:09.697069  474577 system_pods.go:74] duration metric: took 5.648808ms to wait for pod list to return data ...
	I1101 10:47:09.697099  474577 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:47:09.700495  474577 default_sa.go:45] found service account: "default"
	I1101 10:47:09.700561  474577 default_sa.go:55] duration metric: took 3.437948ms for default service account to be created ...
	I1101 10:47:09.700587  474577 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:47:09.704693  474577 system_pods.go:86] 8 kube-system pods found
	I1101 10:47:09.704769  474577 system_pods.go:89] "coredns-5dd5756b68-nd9sf" [76f49986-bf1b-48c2-bb9f-5f1b915e6e21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:47:09.704793  474577 system_pods.go:89] "etcd-old-k8s-version-245622" [1b2e4029-b16e-4cf3-83e1-522a86cca55a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:47:09.704815  474577 system_pods.go:89] "kindnet-sp8fr" [8f85928a-8197-42d1-99ff-3e8aacda2af7] Running
	I1101 10:47:09.704839  474577 system_pods.go:89] "kube-apiserver-old-k8s-version-245622" [d737bb9d-f43a-4223-8278-d59ffcf24352] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:47:09.704881  474577 system_pods.go:89] "kube-controller-manager-old-k8s-version-245622" [ae4aeb88-5b07-4cbf-a840-9cfcd5558ea6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:47:09.704992  474577 system_pods.go:89] "kube-proxy-pkwrv" [f11eb6ad-8629-41f3-bf76-3ce65cfff91d] Running
	I1101 10:47:09.705027  474577 system_pods.go:89] "kube-scheduler-old-k8s-version-245622" [03c58ed7-ece8-4f7d-95ae-8b961f82f82b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:47:09.705047  474577 system_pods.go:89] "storage-provisioner" [4656f817-ef7d-49e6-847a-8bb2f430bf1c] Running
	I1101 10:47:09.705071  474577 system_pods.go:126] duration metric: took 4.4641ms to wait for k8s-apps to be running ...
	I1101 10:47:09.705093  474577 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:47:09.705168  474577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:47:09.719874  474577 system_svc.go:56] duration metric: took 14.771775ms WaitForService to wait for kubelet
	I1101 10:47:09.719943  474577 kubeadm.go:587] duration metric: took 6.116670179s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:47:09.719981  474577 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:47:09.722981  474577 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:47:09.723052  474577 node_conditions.go:123] node cpu capacity is 2
	I1101 10:47:09.723082  474577 node_conditions.go:105] duration metric: took 3.078445ms to run NodePressure ...
	I1101 10:47:09.723108  474577 start.go:242] waiting for startup goroutines ...
	I1101 10:47:09.723138  474577 start.go:247] waiting for cluster config update ...
	I1101 10:47:09.723167  474577 start.go:256] writing updated cluster config ...
	I1101 10:47:09.723472  474577 ssh_runner.go:195] Run: rm -f paused
	I1101 10:47:09.728014  474577 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:47:09.732838  474577 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nd9sf" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:47:11.738979  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:14.239567  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:16.738993  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:18.739794  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:20.745475  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:23.239581  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:25.241873  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:27.740444  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:29.740596  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:31.745534  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:34.242129  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:36.739619  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:39.238484  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	I1101 10:47:40.238593  474577 pod_ready.go:94] pod "coredns-5dd5756b68-nd9sf" is "Ready"
	I1101 10:47:40.238623  474577 pod_ready.go:86] duration metric: took 30.50571828s for pod "coredns-5dd5756b68-nd9sf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.242014  474577 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.247735  474577 pod_ready.go:94] pod "etcd-old-k8s-version-245622" is "Ready"
	I1101 10:47:40.247764  474577 pod_ready.go:86] duration metric: took 5.72432ms for pod "etcd-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.250898  474577 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.256181  474577 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-245622" is "Ready"
	I1101 10:47:40.256209  474577 pod_ready.go:86] duration metric: took 5.281535ms for pod "kube-apiserver-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.260237  474577 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.436438  474577 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-245622" is "Ready"
	I1101 10:47:40.436466  474577 pod_ready.go:86] duration metric: took 176.200719ms for pod "kube-controller-manager-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.637399  474577 pod_ready.go:83] waiting for pod "kube-proxy-pkwrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:41.037142  474577 pod_ready.go:94] pod "kube-proxy-pkwrv" is "Ready"
	I1101 10:47:41.037173  474577 pod_ready.go:86] duration metric: took 399.743301ms for pod "kube-proxy-pkwrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:41.236985  474577 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:41.636991  474577 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-245622" is "Ready"
	I1101 10:47:41.637074  474577 pod_ready.go:86] duration metric: took 400.0587ms for pod "kube-scheduler-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:41.637096  474577 pod_ready.go:40] duration metric: took 31.908997615s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:47:41.695097  474577 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 10:47:41.698223  474577 out.go:203] 
	W1101 10:47:41.701117  474577 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:47:41.703859  474577 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:47:41.706869  474577 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-245622" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.6872335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.694986845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.695798365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.710686054Z" level=info msg="Created container 475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p/dashboard-metrics-scraper" id=0c03360f-d07b-48aa-90e8-3342cbe999a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.711930619Z" level=info msg="Starting container: 475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f" id=552f31d2-cfaf-4b3b-9595-344d8bde370c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.715993919Z" level=info msg="Started container" PID=1669 containerID=475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p/dashboard-metrics-scraper id=552f31d2-cfaf-4b3b-9595-344d8bde370c name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbfc33fdcc57d3ba3310b3db45b4dc5076ec11f76305a417fbf5754ab9aa340e
	Nov 01 10:47:42 old-k8s-version-245622 conmon[1667]: conmon 475ddbab4788c5e5c6ec <ninfo>: container 1669 exited with status 1
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.909680818Z" level=info msg="Removing container: 08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52" id=e8c3f879-c8ef-4495-8446-bedf2e536a5e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.918333379Z" level=info msg="Error loading conmon cgroup of container 08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52: cgroup deleted" id=e8c3f879-c8ef-4495-8446-bedf2e536a5e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.921602046Z" level=info msg="Removed container 08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p/dashboard-metrics-scraper" id=e8c3f879-c8ef-4495-8446-bedf2e536a5e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.626407765Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.634004366Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.634043726Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.63406972Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.637548178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.637583559Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.63760911Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.640774661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.640811306Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.640834166Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.645109357Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.645145082Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.645169271Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.650952809Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.65098924Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	475ddbab4788c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   dbfc33fdcc57d       dashboard-metrics-scraper-5f989dc9cf-4mb2p       kubernetes-dashboard
	b1a51a80d21f8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   7c1466b959d21       storage-provisioner                              kube-system
	b9f762a0b850c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   26 seconds ago      Running             kubernetes-dashboard        0                   64170cdec1e41       kubernetes-dashboard-8694d4445c-dwp8b            kubernetes-dashboard
	1b3246cbd8f5c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           49 seconds ago      Running             coredns                     1                   cd433544cadd5       coredns-5dd5756b68-nd9sf                         kube-system
	307b1c5398717       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   707654d49d625       busybox                                          default
	abbd066627a45       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   242639569cfc2       kindnet-sp8fr                                    kube-system
	161d854de567b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   74793c559b9eb       kube-proxy-pkwrv                                 kube-system
	03411a1f4b138       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   7c1466b959d21       storage-provisioner                              kube-system
	7521d7f517bde       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           54 seconds ago      Running             kube-scheduler              1                   c34c9b6140d64       kube-scheduler-old-k8s-version-245622            kube-system
	6bcc06202ec6d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           54 seconds ago      Running             kube-controller-manager     1                   18202adfa02ad       kube-controller-manager-old-k8s-version-245622   kube-system
	ffc25019ddaa4       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           54 seconds ago      Running             kube-apiserver              1                   a65e2ca96887e       kube-apiserver-old-k8s-version-245622            kube-system
	d0dec16486a37       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           54 seconds ago      Running             etcd                        1                   e9f6cebfac098       etcd-old-k8s-version-245622                      kube-system
	
	
	==> coredns [1b3246cbd8f5ca61597f0697e184a3779abdd98c8882dfa56bc1eff233eb91f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57823 - 34494 "HINFO IN 5685477747124260656.5449520289518855655. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004230604s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-245622
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-245622
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=old-k8s-version-245622
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_46_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:45:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-245622
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:47:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:47:38 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:47:38 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:47:38 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:47:38 +0000   Sat, 01 Nov 2025 10:46:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-245622
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d68081b6-bca0-4e35-910f-cc1a79899cef
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-nd9sf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-old-k8s-version-245622                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-sp8fr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-245622             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-245622    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-pkwrv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-245622             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4mb2p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dwp8b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x9 over 2m5s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-245622 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-245622 event: Registered Node old-k8s-version-245622 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-245622 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-245622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-245622 event: Registered Node old-k8s-version-245622 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[ +37.261841] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d0dec16486a37ef6f1e98204405322aa6db144ec63e0d58b3a5bacb4e12208d0] <==
	{"level":"info","ts":"2025-11-01T10:47:03.605725Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:47:03.600727Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-01T10:47:03.600852Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:47:03.628121Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:47:03.628184Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:47:03.60126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-01T10:47:03.628463Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:47:03.628618Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:47:03.628679Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:47:03.60064Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T10:47:03.630626Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T10:47:04.728952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:47:04.729064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:47:04.729121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T10:47:04.729163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:47:04.729201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T10:47:04.729241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:47:04.729272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T10:47:04.733162Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-245622 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:47:04.733252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:47:04.734275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-01T10:47:04.736603Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:47:04.737534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:47:04.741222Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:47:04.741274Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:47:57 up  2:30,  0 user,  load average: 1.90, 2.92, 2.56
	Linux old-k8s-version-245622 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [abbd066627a451cd1a93700efb4085a69a034a4ba5ec1e3aa6a363490f607319] <==
	I1101 10:47:08.425750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:47:08.426269       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:47:08.426399       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:47:08.426410       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:47:08.426420       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:47:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:47:08.626026       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:47:08.626051       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:47:08.626068       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:47:08.626919       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:47:38.626577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:47:38.626581       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:47:38.626702       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:47:38.627963       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 10:47:40.226217       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:47:40.226245       1 metrics.go:72] Registering metrics
	I1101 10:47:40.226310       1 controller.go:711] "Syncing nftables rules"
	I1101 10:47:48.626092       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:47:48.626131       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ffc25019ddaa4f34ce35fea177fcd8277a5073c8baf6d86e0373d70389879419] <==
	I1101 10:47:07.638559       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:47:07.686587       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:47:07.693428       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:47:07.718122       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 10:47:07.735869       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:47:07.735896       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 10:47:07.736032       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 10:47:07.736071       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:47:07.739736       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:47:07.739757       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:47:07.739765       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:47:07.739771       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:47:07.788454       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1101 10:47:07.889217       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:47:08.327604       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:47:09.438509       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:47:09.490685       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:47:09.525632       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:47:09.543267       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:47:09.566157       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:47:09.643281       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.49.197"}
	I1101 10:47:09.671707       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.201.37"}
	I1101 10:47:19.874706       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:47:20.126291       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 10:47:20.275194       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6bcc06202ec6dfdc8f6841ebe71d51a48215405eae12c71de3ca5b5238bb7214] <==
	I1101 10:47:19.970502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.359075ms"
	I1101 10:47:19.970667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.453µs"
	I1101 10:47:19.970728       1 shared_informer.go:318] Caches are synced for HPA
	I1101 10:47:19.979240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.418476ms"
	I1101 10:47:19.979525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.603882ms"
	I1101 10:47:19.979659       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.88µs"
	I1101 10:47:19.999014       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:47:19.999787       1 shared_informer.go:318] Caches are synced for job
	I1101 10:47:20.005984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.06µs"
	I1101 10:47:20.020165       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1101 10:47:20.077209       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 10:47:20.080524       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:47:20.279401       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1101 10:47:20.430493       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:47:20.430528       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:47:20.437632       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:47:26.866372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.114µs"
	I1101 10:47:27.888881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.465µs"
	I1101 10:47:28.888692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.221µs"
	I1101 10:47:31.911426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.651646ms"
	I1101 10:47:31.911602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.32µs"
	I1101 10:47:39.838308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.934676ms"
	I1101 10:47:39.838634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.799µs"
	I1101 10:47:42.931775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.557µs"
	I1101 10:47:51.764710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.767µs"
	
	
	==> kube-proxy [161d854de567b94f5c2d993b8ba213ead9931dd70f943c41c660ff3d0f4b9fc5] <==
	I1101 10:47:08.572497       1 server_others.go:69] "Using iptables proxy"
	I1101 10:47:08.606257       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 10:47:08.654712       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:47:08.656652       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:47:08.656745       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:47:08.656780       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:47:08.656851       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:47:08.657107       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:47:08.657310       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:47:08.658016       1 config.go:188] "Starting service config controller"
	I1101 10:47:08.658088       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:47:08.658130       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:47:08.658157       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:47:08.658646       1 config.go:315] "Starting node config controller"
	I1101 10:47:08.658692       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:47:08.758767       1 shared_informer.go:318] Caches are synced for service config
	I1101 10:47:08.758842       1 shared_informer.go:318] Caches are synced for node config
	I1101 10:47:08.758858       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7521d7f517bde774e4ae7db3c7fa527b4b635113e737a68e9c588db1e8e80227] <==
	I1101 10:47:05.872650       1 serving.go:348] Generated self-signed cert in-memory
	W1101 10:47:07.581517       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:47:07.581554       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:47:07.581567       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:47:07.581583       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:47:07.647546       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 10:47:07.647593       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:47:07.653305       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 10:47:07.653443       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:47:07.653458       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:47:07.653477       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 10:47:07.753559       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: E1101 10:47:21.100584     775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c8be916-2557-497f-a083-209059ecd4e4-kube-api-access-jg5nq podName:2c8be916-2557-497f-a083-209059ecd4e4 nodeName:}" failed. No retries permitted until 2025-11-01 10:47:21.600552353 +0000 UTC m=+19.045875373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jg5nq" (UniqueName: "kubernetes.io/projected/2c8be916-2557-497f-a083-209059ecd4e4-kube-api-access-jg5nq") pod "dashboard-metrics-scraper-5f989dc9cf-4mb2p" (UID: "2c8be916-2557-497f-a083-209059ecd4e4") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: E1101 10:47:21.106102     775 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: E1101 10:47:21.106143     775 projected.go:198] Error preparing data for projected volume kube-api-access-d6czc for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dwp8b: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: E1101 10:47:21.106212     775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/587849a0-79dc-4cc6-93f8-5c57c64fc5f2-kube-api-access-d6czc podName:587849a0-79dc-4cc6-93f8-5c57c64fc5f2 nodeName:}" failed. No retries permitted until 2025-11-01 10:47:21.606190142 +0000 UTC m=+19.051513170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d6czc" (UniqueName: "kubernetes.io/projected/587849a0-79dc-4cc6-93f8-5c57c64fc5f2-kube-api-access-d6czc") pod "kubernetes-dashboard-8694d4445c-dwp8b" (UID: "587849a0-79dc-4cc6-93f8-5c57c64fc5f2") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: W1101 10:47:21.802948     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/crio-64170cdec1e41ec59eb87300657b7874248e53eb5be6bf85b6ef2383c565fc53 WatchSource:0}: Error finding container 64170cdec1e41ec59eb87300657b7874248e53eb5be6bf85b6ef2383c565fc53: Status 404 returned error can't find the container with id 64170cdec1e41ec59eb87300657b7874248e53eb5be6bf85b6ef2383c565fc53
	Nov 01 10:47:26 old-k8s-version-245622 kubelet[775]: I1101 10:47:26.851405     775 scope.go:117] "RemoveContainer" containerID="6135fe59d963240439aa4addd70a0d6a42ba1570a42cc243c74ec00a06f709a4"
	Nov 01 10:47:27 old-k8s-version-245622 kubelet[775]: I1101 10:47:27.861701     775 scope.go:117] "RemoveContainer" containerID="6135fe59d963240439aa4addd70a0d6a42ba1570a42cc243c74ec00a06f709a4"
	Nov 01 10:47:27 old-k8s-version-245622 kubelet[775]: I1101 10:47:27.862014     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:27 old-k8s-version-245622 kubelet[775]: E1101 10:47:27.862286     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:28 old-k8s-version-245622 kubelet[775]: I1101 10:47:28.866046     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:28 old-k8s-version-245622 kubelet[775]: E1101 10:47:28.867006     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:31 old-k8s-version-245622 kubelet[775]: I1101 10:47:31.747415     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:31 old-k8s-version-245622 kubelet[775]: E1101 10:47:31.748370     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:38 old-k8s-version-245622 kubelet[775]: I1101 10:47:38.893882     775 scope.go:117] "RemoveContainer" containerID="03411a1f4b138c8c725a6a4425f3dfb5b56fa9bd5b1cf0ba2d709f16df5fc3ae"
	Nov 01 10:47:38 old-k8s-version-245622 kubelet[775]: I1101 10:47:38.918095     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dwp8b" podStartSLOduration=10.75576953 podCreationTimestamp="2025-11-01 10:47:19 +0000 UTC" firstStartedPulling="2025-11-01 10:47:21.810251786 +0000 UTC m=+19.255574806" lastFinishedPulling="2025-11-01 10:47:30.971757349 +0000 UTC m=+28.417080377" observedRunningTime="2025-11-01 10:47:31.893552917 +0000 UTC m=+29.338875937" watchObservedRunningTime="2025-11-01 10:47:38.917275101 +0000 UTC m=+36.362598129"
	Nov 01 10:47:42 old-k8s-version-245622 kubelet[775]: I1101 10:47:42.682700     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:42 old-k8s-version-245622 kubelet[775]: I1101 10:47:42.907293     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:42 old-k8s-version-245622 kubelet[775]: I1101 10:47:42.907609     775 scope.go:117] "RemoveContainer" containerID="475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f"
	Nov 01 10:47:42 old-k8s-version-245622 kubelet[775]: E1101 10:47:42.907964     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:51 old-k8s-version-245622 kubelet[775]: I1101 10:47:51.747665     775 scope.go:117] "RemoveContainer" containerID="475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f"
	Nov 01 10:47:51 old-k8s-version-245622 kubelet[775]: E1101 10:47:51.748006     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:55 old-k8s-version-245622 kubelet[775]: I1101 10:47:55.047212     775 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 10:47:55 old-k8s-version-245622 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:47:55 old-k8s-version-245622 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:47:55 old-k8s-version-245622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b9f762a0b850c4519e12cfd7ea375cfaf75618638005b8751293904d0528b27d] <==
	2025/11/01 10:47:31 Using namespace: kubernetes-dashboard
	2025/11/01 10:47:31 Using in-cluster config to connect to apiserver
	2025/11/01 10:47:31 Using secret token for csrf signing
	2025/11/01 10:47:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:47:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:47:31 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 10:47:31 Generating JWE encryption key
	2025/11/01 10:47:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:47:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:47:32 Initializing JWE encryption key from synchronized object
	2025/11/01 10:47:32 Creating in-cluster Sidecar client
	2025/11/01 10:47:32 Serving insecurely on HTTP port: 9090
	2025/11/01 10:47:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:47:31 Starting overwatch
	
	
	==> storage-provisioner [03411a1f4b138c8c725a6a4425f3dfb5b56fa9bd5b1cf0ba2d709f16df5fc3ae] <==
	I1101 10:47:08.410222       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:47:38.412589       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b1a51a80d21f8f7f8ba1d74c8ad7ef2ab7b934b22e7e6b778d90a80c64f1f40c] <==
	I1101 10:47:38.945175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:47:38.958732       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:47:38.958783       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:47:56.359090       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:47:56.361489       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245622_1f1bcbc9-bcc2-463a-b449-2fbe27f5d9ff!
	I1101 10:47:56.363256       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51e990f1-a0af-4cdb-b36a-ecec58b0ed5a", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-245622_1f1bcbc9-bcc2-463a-b449-2fbe27f5d9ff became leader
	I1101 10:47:56.461866       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245622_1f1bcbc9-bcc2-463a-b449-2fbe27f5d9ff!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-245622 -n old-k8s-version-245622
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-245622 -n old-k8s-version-245622: exit status 2 (403.861858ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-245622 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
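For readers unfamiliar with the `--format={{.APIServer}}` flag used in the status check above: it is a Go text/template that minikube renders over its status output, which is why the stdout block contains only the single word `Running`. Below is a minimal, self-contained sketch of that templating mechanism; the `clusterStatus` struct and its field values are hypothetical stand-ins for illustration only, not minikube's actual types.

package main

import (
	"os"
	"text/template"
)

// clusterStatus is a hypothetical struct mirroring the kind of fields the
// `--format={{.APIServer}}` template selects from; names are illustrative.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	st := clusterStatus{Host: "Running", Kubelet: "Stopped", APIServer: "Running", Kubeconfig: "Configured"}
	// Render the same template string that was passed via --format above.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Running
}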
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-245622
helpers_test.go:243: (dbg) docker inspect old-k8s-version-245622:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3",
	        "Created": "2025-11-01T10:45:35.000054348Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474705,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:46:55.451826202Z",
	            "FinishedAt": "2025-11-01T10:46:54.60786023Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/hosts",
	        "LogPath": "/var/lib/docker/containers/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3-json.log",
	        "Name": "/old-k8s-version-245622",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-245622:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-245622",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3",
	                "LowerDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9af603e788e70b11de590f8d9f6e46ff7a9b3d8fddfca2c89987cfa84b81eaf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-245622",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-245622/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-245622",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-245622",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-245622",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e04cfcc14553c87638ae07b8c47082312c90a3a6449a7d680b4557d1c54aa2a5",
	            "SandboxKey": "/var/run/docker/netns/e04cfcc14553",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-245622": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:48:c9:1a:5b:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "886e51d4881a05bd8806566eef0c793a83105f195753997f1581ba0395c0dfba",
	                    "EndpointID": "6a7cf7778a1b531de25edaa1d1f251932349261c6f7f669e20010a717430dad8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-245622",
	                        "c9c5181d464a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
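Note: the inspect output confirms the container is Running and that the guest's 22/tcp is published on 127.0.0.1:33423, the same address the provisioning log dials below. The harness reads that mapping with a Go template; an equivalent stand-alone invocation (a sketch, with quoting simplified relative to the harness) is:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-245622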
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-245622 -n old-k8s-version-245622
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-245622 -n old-k8s-version-245622: exit status 2 (375.331201ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
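Note: the single-field `--format` templates used here (`{{.Host}}` above, `{{.APIServer}}` earlier) each print only one component, so the non-zero exit code is the only hint that some other component is not healthy. A broader manual check (a sketch; it assumes the Kubelet and Kubeconfig fields that appear in the plain `status` output) would be:

	out/minikube-linux-arm64 -p old-k8s-version-245622 status \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'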
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-245622 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-245622 logs -n 25: (1.746573925s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-883951 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo containerd config dump                                                                                                                                                                                                  │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ ssh     │ -p cilium-883951 sudo crio config                                                                                                                                                                                                             │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ delete  │ -p cilium-883951                                                                                                                                                                                                                              │ cilium-883951             │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p force-systemd-env-555657 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-555657  │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p kubernetes-upgrade-946953                                                                                                                                                                                                                  │ kubernetes-upgrade-946953 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p force-systemd-env-555657                                                                                                                                                                                                                   │ force-systemd-env-555657  │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-308600    │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p cert-options-186677 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ cert-options-186677 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ -p cert-options-186677 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ delete  │ -p cert-options-186677                                                                                                                                                                                                                        │ cert-options-186677       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │                     │
	│ stop    │ -p old-k8s-version-245622 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-245622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:47 UTC │
	│ image   │ old-k8s-version-245622 image list --format=json                                                                                                                                                                                               │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │ 01 Nov 25 10:47 UTC │
	│ pause   │ -p old-k8s-version-245622 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-245622    │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:46:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:46:55.170720  474577 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:46:55.170839  474577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:46:55.170875  474577 out.go:374] Setting ErrFile to fd 2...
	I1101 10:46:55.170890  474577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:46:55.171175  474577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:46:55.171570  474577 out.go:368] Setting JSON to false
	I1101 10:46:55.172520  474577 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8967,"bootTime":1761985048,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:46:55.172598  474577 start.go:143] virtualization:  
	I1101 10:46:55.175760  474577 out.go:179] * [old-k8s-version-245622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:46:55.179989  474577 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:46:55.180026  474577 notify.go:221] Checking for updates...
	I1101 10:46:55.186589  474577 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:46:55.189633  474577 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:46:55.192634  474577 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:46:55.195667  474577 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:46:55.198818  474577 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:46:55.202411  474577 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:46:55.206038  474577 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 10:46:55.208971  474577 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:46:55.245144  474577 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:46:55.245282  474577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:46:55.301503  474577 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:46:55.291915478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:46:55.301610  474577 docker.go:319] overlay module found
	I1101 10:46:55.304657  474577 out.go:179] * Using the docker driver based on existing profile
	I1101 10:46:55.307460  474577 start.go:309] selected driver: docker
	I1101 10:46:55.307482  474577 start.go:930] validating driver "docker" against &{Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:46:55.307590  474577 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:46:55.308375  474577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:46:55.362085  474577 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:46:55.353144743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:46:55.362436  474577 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:46:55.362472  474577 cni.go:84] Creating CNI manager for ""
	I1101 10:46:55.362533  474577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:46:55.362572  474577 start.go:353] cluster config:
	{Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:46:55.367597  474577 out.go:179] * Starting "old-k8s-version-245622" primary control-plane node in "old-k8s-version-245622" cluster
	I1101 10:46:55.370501  474577 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:46:55.373456  474577 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:46:55.376281  474577 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:46:55.376338  474577 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 10:46:55.376352  474577 cache.go:59] Caching tarball of preloaded images
	I1101 10:46:55.376382  474577 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:46:55.376479  474577 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:46:55.376489  474577 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:46:55.376612  474577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/config.json ...
	I1101 10:46:55.396988  474577 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:46:55.397013  474577 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:46:55.397031  474577 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:46:55.397055  474577 start.go:360] acquireMachinesLock for old-k8s-version-245622: {Name:mkfbe1634de833e16a5a7580b9fd5f9c75eacf88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:46:55.397129  474577 start.go:364] duration metric: took 47.262µs to acquireMachinesLock for "old-k8s-version-245622"
	I1101 10:46:55.397153  474577 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:46:55.397159  474577 fix.go:54] fixHost starting: 
	I1101 10:46:55.397426  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:46:55.414267  474577 fix.go:112] recreateIfNeeded on old-k8s-version-245622: state=Stopped err=<nil>
	W1101 10:46:55.414302  474577 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:46:55.417787  474577 out.go:252] * Restarting existing docker container for "old-k8s-version-245622" ...
	I1101 10:46:55.417898  474577 cli_runner.go:164] Run: docker start old-k8s-version-245622
	I1101 10:46:55.710043  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:46:55.735249  474577 kic.go:430] container "old-k8s-version-245622" state is running.
	I1101 10:46:55.735642  474577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:46:55.755648  474577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/config.json ...
	I1101 10:46:55.756263  474577 machine.go:94] provisionDockerMachine start ...
	I1101 10:46:55.756367  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:55.775530  474577 main.go:143] libmachine: Using SSH client type: native
	I1101 10:46:55.775878  474577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1101 10:46:55.775889  474577 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:46:55.776558  474577 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:46:58.928577  474577 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245622
	
	I1101 10:46:58.928601  474577 ubuntu.go:182] provisioning hostname "old-k8s-version-245622"
	I1101 10:46:58.928663  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:58.953079  474577 main.go:143] libmachine: Using SSH client type: native
	I1101 10:46:58.953386  474577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1101 10:46:58.953403  474577 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-245622 && echo "old-k8s-version-245622" | sudo tee /etc/hostname
	I1101 10:46:59.114998  474577 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245622
	
	I1101 10:46:59.115108  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:59.133238  474577 main.go:143] libmachine: Using SSH client type: native
	I1101 10:46:59.133567  474577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1101 10:46:59.133590  474577 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-245622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-245622/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-245622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:46:59.289348  474577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:46:59.289442  474577 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:46:59.289514  474577 ubuntu.go:190] setting up certificates
	I1101 10:46:59.289546  474577 provision.go:84] configureAuth start
	I1101 10:46:59.289658  474577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:46:59.307200  474577 provision.go:143] copyHostCerts
	I1101 10:46:59.307271  474577 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:46:59.307287  474577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:46:59.307365  474577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:46:59.307475  474577 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:46:59.307481  474577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:46:59.307511  474577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:46:59.307568  474577 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:46:59.307573  474577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:46:59.307598  474577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:46:59.307651  474577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-245622 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-245622]
	I1101 10:46:59.672060  474577 provision.go:177] copyRemoteCerts
	I1101 10:46:59.672150  474577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:46:59.672233  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:59.689796  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:46:59.796720  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:46:59.813671  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:46:59.832614  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:46:59.852871  474577 provision.go:87] duration metric: took 563.295826ms to configureAuth
	I1101 10:46:59.852895  474577 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:46:59.853122  474577 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:46:59.853224  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:46:59.871398  474577 main.go:143] libmachine: Using SSH client type: native
	I1101 10:46:59.871760  474577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1101 10:46:59.871775  474577 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:47:00.500539  474577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:47:00.500574  474577 machine.go:97] duration metric: took 4.744294476s to provisionDockerMachine
	I1101 10:47:00.500587  474577 start.go:293] postStartSetup for "old-k8s-version-245622" (driver="docker")
	I1101 10:47:00.500617  474577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:47:00.500751  474577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:47:00.500829  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:00.527500  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:00.637018  474577 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:47:00.640614  474577 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:47:00.640646  474577 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:47:00.640658  474577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:47:00.640738  474577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:47:00.640854  474577 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:47:00.641031  474577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:47:00.648704  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:47:00.667091  474577 start.go:296] duration metric: took 166.469161ms for postStartSetup
	I1101 10:47:00.667176  474577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:47:00.667245  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:00.685364  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:00.790340  474577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:47:00.795327  474577 fix.go:56] duration metric: took 5.398147287s for fixHost
	I1101 10:47:00.795354  474577 start.go:83] releasing machines lock for "old-k8s-version-245622", held for 5.398212617s
	I1101 10:47:00.795434  474577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-245622
	I1101 10:47:00.811946  474577 ssh_runner.go:195] Run: cat /version.json
	I1101 10:47:00.812008  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:00.812270  474577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:47:00.812324  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:00.830990  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:00.831147  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:01.029987  474577 ssh_runner.go:195] Run: systemctl --version
	I1101 10:47:01.036374  474577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:47:01.075618  474577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:47:01.081313  474577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:47:01.081392  474577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:47:01.089600  474577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:47:01.089634  474577 start.go:496] detecting cgroup driver to use...
	I1101 10:47:01.089668  474577 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:47:01.089723  474577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:47:01.109739  474577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:47:01.123919  474577 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:47:01.124050  474577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:47:01.142035  474577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:47:01.155863  474577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:47:01.280281  474577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:47:01.410774  474577 docker.go:234] disabling docker service ...
	I1101 10:47:01.410894  474577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:47:01.426580  474577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:47:01.441982  474577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:47:01.554067  474577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:47:01.667257  474577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:47:01.680575  474577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:47:01.696497  474577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:47:01.696565  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.706214  474577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:47:01.706296  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.715497  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.725051  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.733843  474577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:47:01.742680  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.752070  474577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.760791  474577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:47:01.769951  474577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:47:01.778118  474577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:47:01.785963  474577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:47:01.907388  474577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:47:02.053052  474577 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:47:02.053172  474577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:47:02.057315  474577 start.go:564] Will wait 60s for crictl version
	I1101 10:47:02.057430  474577 ssh_runner.go:195] Run: which crictl
	I1101 10:47:02.061132  474577 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:47:02.086466  474577 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:47:02.086606  474577 ssh_runner.go:195] Run: crio --version
	I1101 10:47:02.124029  474577 ssh_runner.go:195] Run: crio --version
	I1101 10:47:02.156009  474577 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 10:47:02.158870  474577 cli_runner.go:164] Run: docker network inspect old-k8s-version-245622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:47:02.175998  474577 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:47:02.180110  474577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:47:02.190429  474577 kubeadm.go:884] updating cluster {Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:47:02.190559  474577 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:47:02.190611  474577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:47:02.228473  474577 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:47:02.228498  474577 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:47:02.228558  474577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:47:02.257464  474577 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:47:02.257488  474577 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:47:02.257497  474577 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 10:47:02.257600  474577 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-245622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:47:02.257687  474577 ssh_runner.go:195] Run: crio config
	I1101 10:47:02.329281  474577 cni.go:84] Creating CNI manager for ""
	I1101 10:47:02.329305  474577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:47:02.329322  474577 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:47:02.329345  474577 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-245622 NodeName:old-k8s-version-245622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:47:02.329489  474577 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-245622"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:47:02.329570  474577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:47:02.337568  474577 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:47:02.337656  474577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:47:02.345694  474577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:47:02.359633  474577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:47:02.373086  474577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
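
The 2160-byte kubeadm.yaml.new written above is the multi-document config generated a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). One way to sanity-check such a file on the node, sketched here under the assumption that a kubeadm binary sits next to kubectl in the binaries directory the log just listed:

    # Sketch: dry-run the generated config with the version-matched kubeadm; nothing is applied to the running cluster.
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
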
	I1101 10:47:02.387746  474577 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:47:02.392278  474577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:47:02.403459  474577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:47:02.526591  474577 ssh_runner.go:195] Run: sudo systemctl start kubelet
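
The daemon-reload above makes systemd pick up the kubelet.service unit and the 10-kubeadm.conf drop-in copied over just before it, and the start that follows launches kubelet with the ExecStart shown earlier. A small sketch for confirming which files systemd actually merged (commands are illustrative, not part of the run):

    # systemctl cat prints each source file path, so the drop-in should appear
    # as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
    systemctl cat kubelet
    systemctl is-active kubelet
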
	I1101 10:47:02.549907  474577 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622 for IP: 192.168.85.2
	I1101 10:47:02.549928  474577 certs.go:195] generating shared ca certs ...
	I1101 10:47:02.549944  474577 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:47:02.550139  474577 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:47:02.550209  474577 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:47:02.550224  474577 certs.go:257] generating profile certs ...
	I1101 10:47:02.550337  474577 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.key
	I1101 10:47:02.550428  474577 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key.6a807d81
	I1101 10:47:02.550502  474577 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.key
	I1101 10:47:02.550644  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:47:02.550692  474577 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:47:02.550712  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:47:02.550739  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:47:02.550777  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:47:02.550811  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:47:02.550875  474577 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:47:02.551961  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:47:02.579858  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:47:02.602243  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:47:02.624730  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:47:02.654717  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:47:02.685575  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:47:02.711490  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:47:02.739393  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:47:02.760322  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:47:02.778231  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:47:02.804732  474577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:47:02.824879  474577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:47:02.840276  474577 ssh_runner.go:195] Run: openssl version
	I1101 10:47:02.846490  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:47:02.855119  474577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:47:02.858949  474577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:47:02.859015  474577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:47:02.900627  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:47:02.908703  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:47:02.917342  474577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:47:02.921676  474577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:47:02.921773  474577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:47:02.963367  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:47:02.971448  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:47:02.980776  474577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:47:02.984643  474577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:47:02.984730  474577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:47:03.027414  474577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
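
Each ln -fs above names the symlink after the certificate's OpenSSL subject hash, which is exactly what the preceding `openssl x509 -hash -noout` call prints (b5213941 for minikubeCA.pem, hence b5213941.0). The same step written out explicitly, as a sketch:

    # Sketch: derive the subject-hash link name for a CA cert and (re)create the link.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here
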
	I1101 10:47:03.035790  474577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:47:03.040038  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:47:03.082158  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:47:03.129329  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:47:03.198656  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:47:03.287285  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:47:03.385986  474577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
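
The six openssl calls above all use `-checkend 86400`, i.e. each one fails if its certificate would expire within the next 24 hours. The same check written as a loop over the control-plane certs named in the log, as a sketch:

    # Sketch: report any of these certs that expires within 86400 seconds (24h).
    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "expiring soon: ${c}.crt"
    done
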
	I1101 10:47:03.468479  474577 kubeadm.go:401] StartCluster: {Name:old-k8s-version-245622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-245622 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:47:03.468571  474577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:47:03.468651  474577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:47:03.540173  474577 cri.go:89] found id: "7521d7f517bde774e4ae7db3c7fa527b4b635113e737a68e9c588db1e8e80227"
	I1101 10:47:03.540200  474577 cri.go:89] found id: "6bcc06202ec6dfdc8f6841ebe71d51a48215405eae12c71de3ca5b5238bb7214"
	I1101 10:47:03.540206  474577 cri.go:89] found id: "ffc25019ddaa4f34ce35fea177fcd8277a5073c8baf6d86e0373d70389879419"
	I1101 10:47:03.540218  474577 cri.go:89] found id: "d0dec16486a37ef6f1e98204405322aa6db144ec63e0d58b3a5bacb4e12208d0"
	I1101 10:47:03.540224  474577 cri.go:89] found id: ""
	I1101 10:47:03.540275  474577 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:47:03.560823  474577 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:47:03Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:47:03.560896  474577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:47:03.575564  474577 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:47:03.575585  474577 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:47:03.575640  474577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:47:03.587110  474577 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:47:03.587734  474577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-245622" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:47:03.588049  474577 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-245622" cluster setting kubeconfig missing "old-k8s-version-245622" context setting]
	I1101 10:47:03.588492  474577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:47:03.590197  474577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:47:03.602015  474577 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:47:03.602060  474577 kubeadm.go:602] duration metric: took 26.468837ms to restartPrimaryControlPlane
	I1101 10:47:03.602070  474577 kubeadm.go:403] duration metric: took 133.602642ms to StartCluster
	I1101 10:47:03.602086  474577 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:47:03.602149  474577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:47:03.603026  474577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:47:03.603238  474577 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:47:03.603539  474577 config.go:182] Loaded profile config "old-k8s-version-245622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:47:03.603589  474577 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:47:03.603658  474577 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-245622"
	I1101 10:47:03.603675  474577 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-245622"
	W1101 10:47:03.603682  474577 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:47:03.603702  474577 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:47:03.604379  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:03.604446  474577 addons.go:70] Setting dashboard=true in profile "old-k8s-version-245622"
	I1101 10:47:03.604468  474577 addons.go:239] Setting addon dashboard=true in "old-k8s-version-245622"
	W1101 10:47:03.604475  474577 addons.go:248] addon dashboard should already be in state true
	I1101 10:47:03.604506  474577 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:47:03.604965  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:03.606716  474577 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-245622"
	I1101 10:47:03.606747  474577 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-245622"
	I1101 10:47:03.607030  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:03.607163  474577 out.go:179] * Verifying Kubernetes components...
	I1101 10:47:03.610866  474577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:47:03.652103  474577 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-245622"
	W1101 10:47:03.652127  474577 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:47:03.652152  474577 host.go:66] Checking if "old-k8s-version-245622" exists ...
	I1101 10:47:03.658052  474577 cli_runner.go:164] Run: docker container inspect old-k8s-version-245622 --format={{.State.Status}}
	I1101 10:47:03.664097  474577 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:47:03.667133  474577 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:47:03.670997  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:47:03.671025  474577 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:47:03.671094  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:03.680622  474577 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:47:03.683530  474577 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:47:03.683565  474577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:47:03.683630  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:03.719865  474577 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:47:03.719887  474577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:47:03.719955  474577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-245622
	I1101 10:47:03.743401  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:03.746265  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:03.764228  474577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/old-k8s-version-245622/id_rsa Username:docker}
	I1101 10:47:03.958556  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:47:03.958630  474577 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:47:03.981455  474577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:47:04.026482  474577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:47:04.041783  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:47:04.041857  474577 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:47:04.047945  474577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:47:04.052082  474577 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-245622" to be "Ready" ...
	I1101 10:47:04.118185  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:47:04.118206  474577 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:47:04.194435  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:47:04.194455  474577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:47:04.266889  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:47:04.266909  474577 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:47:04.338157  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:47:04.338222  474577 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:47:04.361630  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:47:04.361693  474577 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:47:04.381414  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:47:04.381478  474577 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:47:04.403330  474577 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:47:04.403394  474577 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:47:04.431284  474577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:47:07.599942  474577 node_ready.go:49] node "old-k8s-version-245622" is "Ready"
	I1101 10:47:07.599970  474577 node_ready.go:38] duration metric: took 3.547794573s for node "old-k8s-version-245622" to be "Ready" ...
	I1101 10:47:07.599984  474577 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:47:07.600043  474577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:47:09.072448  474577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.045871767s)
	I1101 10:47:09.072554  474577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.024543901s)
	I1101 10:47:09.679271  474577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.247903313s)
	I1101 10:47:09.679522  474577 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.079467628s)
	I1101 10:47:09.679574  474577 api_server.go:72] duration metric: took 6.076302364s to wait for apiserver process to appear ...
	I1101 10:47:09.679617  474577 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:47:09.679647  474577 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:47:09.682714  474577 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-245622 addons enable metrics-server
	
	I1101 10:47:09.685690  474577 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 10:47:09.688943  474577 addons.go:515] duration metric: took 6.085353082s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:47:09.689733  474577 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
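
The 200/ok above is the response to the healthz probe started a few lines earlier against https://192.168.85.2:8443/healthz. The same probe by hand, as a sketch (on a default RBAC setup /healthz is readable anonymously; -k skips TLS verification, or point curl at the cluster CA instead):

    # Sketch: probe the apiserver health endpoint directly from the node.
    curl -sk https://192.168.85.2:8443/healthz; echo
    curl -s --cacert /var/lib/minikube/certs/ca.crt https://192.168.85.2:8443/healthz; echo
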
	I1101 10:47:09.691380  474577 api_server.go:141] control plane version: v1.28.0
	I1101 10:47:09.691402  474577 api_server.go:131] duration metric: took 11.765453ms to wait for apiserver health ...
	I1101 10:47:09.691411  474577 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:47:09.696700  474577 system_pods.go:59] 8 kube-system pods found
	I1101 10:47:09.696786  474577 system_pods.go:61] "coredns-5dd5756b68-nd9sf" [76f49986-bf1b-48c2-bb9f-5f1b915e6e21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:47:09.696811  474577 system_pods.go:61] "etcd-old-k8s-version-245622" [1b2e4029-b16e-4cf3-83e1-522a86cca55a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:47:09.696843  474577 system_pods.go:61] "kindnet-sp8fr" [8f85928a-8197-42d1-99ff-3e8aacda2af7] Running
	I1101 10:47:09.696869  474577 system_pods.go:61] "kube-apiserver-old-k8s-version-245622" [d737bb9d-f43a-4223-8278-d59ffcf24352] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:47:09.696893  474577 system_pods.go:61] "kube-controller-manager-old-k8s-version-245622" [ae4aeb88-5b07-4cbf-a840-9cfcd5558ea6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:47:09.696989  474577 system_pods.go:61] "kube-proxy-pkwrv" [f11eb6ad-8629-41f3-bf76-3ce65cfff91d] Running
	I1101 10:47:09.697025  474577 system_pods.go:61] "kube-scheduler-old-k8s-version-245622" [03c58ed7-ece8-4f7d-95ae-8b961f82f82b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:47:09.697045  474577 system_pods.go:61] "storage-provisioner" [4656f817-ef7d-49e6-847a-8bb2f430bf1c] Running
	I1101 10:47:09.697069  474577 system_pods.go:74] duration metric: took 5.648808ms to wait for pod list to return data ...
	I1101 10:47:09.697099  474577 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:47:09.700495  474577 default_sa.go:45] found service account: "default"
	I1101 10:47:09.700561  474577 default_sa.go:55] duration metric: took 3.437948ms for default service account to be created ...
	I1101 10:47:09.700587  474577 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:47:09.704693  474577 system_pods.go:86] 8 kube-system pods found
	I1101 10:47:09.704769  474577 system_pods.go:89] "coredns-5dd5756b68-nd9sf" [76f49986-bf1b-48c2-bb9f-5f1b915e6e21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:47:09.704793  474577 system_pods.go:89] "etcd-old-k8s-version-245622" [1b2e4029-b16e-4cf3-83e1-522a86cca55a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:47:09.704815  474577 system_pods.go:89] "kindnet-sp8fr" [8f85928a-8197-42d1-99ff-3e8aacda2af7] Running
	I1101 10:47:09.704839  474577 system_pods.go:89] "kube-apiserver-old-k8s-version-245622" [d737bb9d-f43a-4223-8278-d59ffcf24352] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:47:09.704881  474577 system_pods.go:89] "kube-controller-manager-old-k8s-version-245622" [ae4aeb88-5b07-4cbf-a840-9cfcd5558ea6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:47:09.704992  474577 system_pods.go:89] "kube-proxy-pkwrv" [f11eb6ad-8629-41f3-bf76-3ce65cfff91d] Running
	I1101 10:47:09.705027  474577 system_pods.go:89] "kube-scheduler-old-k8s-version-245622" [03c58ed7-ece8-4f7d-95ae-8b961f82f82b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:47:09.705047  474577 system_pods.go:89] "storage-provisioner" [4656f817-ef7d-49e6-847a-8bb2f430bf1c] Running
	I1101 10:47:09.705071  474577 system_pods.go:126] duration metric: took 4.4641ms to wait for k8s-apps to be running ...
	I1101 10:47:09.705093  474577 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:47:09.705168  474577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:47:09.719874  474577 system_svc.go:56] duration metric: took 14.771775ms WaitForService to wait for kubelet
	I1101 10:47:09.719943  474577 kubeadm.go:587] duration metric: took 6.116670179s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:47:09.719981  474577 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:47:09.722981  474577 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:47:09.723052  474577 node_conditions.go:123] node cpu capacity is 2
	I1101 10:47:09.723082  474577 node_conditions.go:105] duration metric: took 3.078445ms to run NodePressure ...
	I1101 10:47:09.723108  474577 start.go:242] waiting for startup goroutines ...
	I1101 10:47:09.723138  474577 start.go:247] waiting for cluster config update ...
	I1101 10:47:09.723167  474577 start.go:256] writing updated cluster config ...
	I1101 10:47:09.723472  474577 ssh_runner.go:195] Run: rm -f paused
	I1101 10:47:09.728014  474577 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:47:09.732838  474577 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nd9sf" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:47:11.738979  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:14.239567  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:16.738993  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:18.739794  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:20.745475  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:23.239581  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:25.241873  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:27.740444  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:29.740596  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:31.745534  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:34.242129  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:36.739619  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	W1101 10:47:39.238484  474577 pod_ready.go:104] pod "coredns-5dd5756b68-nd9sf" is not "Ready", error: <nil>
	I1101 10:47:40.238593  474577 pod_ready.go:94] pod "coredns-5dd5756b68-nd9sf" is "Ready"
	I1101 10:47:40.238623  474577 pod_ready.go:86] duration metric: took 30.50571828s for pod "coredns-5dd5756b68-nd9sf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.242014  474577 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.247735  474577 pod_ready.go:94] pod "etcd-old-k8s-version-245622" is "Ready"
	I1101 10:47:40.247764  474577 pod_ready.go:86] duration metric: took 5.72432ms for pod "etcd-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.250898  474577 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.256181  474577 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-245622" is "Ready"
	I1101 10:47:40.256209  474577 pod_ready.go:86] duration metric: took 5.281535ms for pod "kube-apiserver-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.260237  474577 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.436438  474577 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-245622" is "Ready"
	I1101 10:47:40.436466  474577 pod_ready.go:86] duration metric: took 176.200719ms for pod "kube-controller-manager-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:40.637399  474577 pod_ready.go:83] waiting for pod "kube-proxy-pkwrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:41.037142  474577 pod_ready.go:94] pod "kube-proxy-pkwrv" is "Ready"
	I1101 10:47:41.037173  474577 pod_ready.go:86] duration metric: took 399.743301ms for pod "kube-proxy-pkwrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:41.236985  474577 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:41.636991  474577 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-245622" is "Ready"
	I1101 10:47:41.637074  474577 pod_ready.go:86] duration metric: took 400.0587ms for pod "kube-scheduler-old-k8s-version-245622" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:47:41.637096  474577 pod_ready.go:40] duration metric: took 31.908997615s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
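
The ~32s of "extra waiting" above is minikube polling the labelled kube-system pods until each reports Ready; coredns is the slow one in this run. Roughly the same wait expressed with kubectl, as a sketch that reuses a label selector from the log:

    # Sketch: block until the DNS pods in kube-system report Ready (4m cap, matching the log).
    kubectl --context old-k8s-version-245622 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
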
	I1101 10:47:41.695097  474577 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 10:47:41.698223  474577 out.go:203] 
	W1101 10:47:41.701117  474577 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:47:41.703859  474577 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:47:41.706869  474577 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-245622" cluster and "default" namespace by default
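
The kubectl warning above is only about client/server version skew (host client 1.33.2 against a 1.28.0 cluster); the suggested fix runs the version-matched kubectl that minikube manages for this profile, for example:

    # Use the profile's kubectl (v1.28.0) instead of the host's 1.33 client.
    minikube -p old-k8s-version-245622 kubectl -- get pods -A
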
	
	
	==> CRI-O <==
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.6872335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.694986845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.695798365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.710686054Z" level=info msg="Created container 475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p/dashboard-metrics-scraper" id=0c03360f-d07b-48aa-90e8-3342cbe999a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.711930619Z" level=info msg="Starting container: 475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f" id=552f31d2-cfaf-4b3b-9595-344d8bde370c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.715993919Z" level=info msg="Started container" PID=1669 containerID=475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p/dashboard-metrics-scraper id=552f31d2-cfaf-4b3b-9595-344d8bde370c name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbfc33fdcc57d3ba3310b3db45b4dc5076ec11f76305a417fbf5754ab9aa340e
	Nov 01 10:47:42 old-k8s-version-245622 conmon[1667]: conmon 475ddbab4788c5e5c6ec <ninfo>: container 1669 exited with status 1
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.909680818Z" level=info msg="Removing container: 08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52" id=e8c3f879-c8ef-4495-8446-bedf2e536a5e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.918333379Z" level=info msg="Error loading conmon cgroup of container 08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52: cgroup deleted" id=e8c3f879-c8ef-4495-8446-bedf2e536a5e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:47:42 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:42.921602046Z" level=info msg="Removed container 08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p/dashboard-metrics-scraper" id=e8c3f879-c8ef-4495-8446-bedf2e536a5e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.626407765Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.634004366Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.634043726Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.63406972Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.637548178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.637583559Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.63760911Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.640774661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.640811306Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.640834166Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.645109357Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.645145082Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.645169271Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.650952809Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:47:48 old-k8s-version-245622 crio[650]: time="2025-11-01T10:47:48.65098924Z" level=info msg="Updated default CNI network name to kindnet"
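
The CREATE/WRITE/RENAME sequence above is kindnet rewriting its CNI config atomically: it writes 10-kindnet.conflist.temp and then renames it into place, and CRI-O's config watcher re-reads the default network on each event. A quick way to look at the file it settled on, as a sketch:

    # Sketch: show the CNI network definition CRI-O reloaded after the rename events above.
    sudo cat /etc/cni/net.d/10-kindnet.conflist
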
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	475ddbab4788c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   dbfc33fdcc57d       dashboard-metrics-scraper-5f989dc9cf-4mb2p       kubernetes-dashboard
	b1a51a80d21f8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   7c1466b959d21       storage-provisioner                              kube-system
	b9f762a0b850c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago      Running             kubernetes-dashboard        0                   64170cdec1e41       kubernetes-dashboard-8694d4445c-dwp8b            kubernetes-dashboard
	1b3246cbd8f5c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   cd433544cadd5       coredns-5dd5756b68-nd9sf                         kube-system
	307b1c5398717       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   707654d49d625       busybox                                          default
	abbd066627a45       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   242639569cfc2       kindnet-sp8fr                                    kube-system
	161d854de567b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   74793c559b9eb       kube-proxy-pkwrv                                 kube-system
	03411a1f4b138       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   7c1466b959d21       storage-provisioner                              kube-system
	7521d7f517bde       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           56 seconds ago      Running             kube-scheduler              1                   c34c9b6140d64       kube-scheduler-old-k8s-version-245622            kube-system
	6bcc06202ec6d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           56 seconds ago      Running             kube-controller-manager     1                   18202adfa02ad       kube-controller-manager-old-k8s-version-245622   kube-system
	ffc25019ddaa4       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           56 seconds ago      Running             kube-apiserver              1                   a65e2ca96887e       kube-apiserver-old-k8s-version-245622            kube-system
	d0dec16486a37       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           56 seconds ago      Running             etcd                        1                   e9f6cebfac098       etcd-old-k8s-version-245622                      kube-system
	
	
	==> coredns [1b3246cbd8f5ca61597f0697e184a3779abdd98c8882dfa56bc1eff233eb91f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57823 - 34494 "HINFO IN 5685477747124260656.5449520289518855655. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004230604s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-245622
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-245622
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=old-k8s-version-245622
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_46_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:45:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-245622
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:47:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:47:38 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:47:38 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:47:38 +0000   Sat, 01 Nov 2025 10:45:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:47:38 +0000   Sat, 01 Nov 2025 10:46:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-245622
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d68081b6-bca0-4e35-910f-cc1a79899cef
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-nd9sf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-245622                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-sp8fr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-245622             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-245622    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-pkwrv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-245622             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4mb2p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dwp8b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x9 over 2m8s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-245622 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-245622 event: Registered Node old-k8s-version-245622 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-245622 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-245622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node old-k8s-version-245622 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-245622 event: Registered Node old-k8s-version-245622 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[ +37.261841] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d0dec16486a37ef6f1e98204405322aa6db144ec63e0d58b3a5bacb4e12208d0] <==
	{"level":"info","ts":"2025-11-01T10:47:03.605725Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:47:03.600727Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-01T10:47:03.600852Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:47:03.628121Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:47:03.628184Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:47:03.60126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-01T10:47:03.628463Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:47:03.628618Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:47:03.628679Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:47:03.60064Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T10:47:03.630626Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T10:47:04.728952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:47:04.729064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:47:04.729121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T10:47:04.729163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:47:04.729201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T10:47:04.729241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:47:04.729272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T10:47:04.733162Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-245622 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:47:04.733252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:47:04.734275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-01T10:47:04.736603Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:47:04.737534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:47:04.741222Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:47:04.741274Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:48:00 up  2:30,  0 user,  load average: 1.90, 2.92, 2.56
	Linux old-k8s-version-245622 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [abbd066627a451cd1a93700efb4085a69a034a4ba5ec1e3aa6a363490f607319] <==
	I1101 10:47:08.425750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:47:08.426269       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:47:08.426399       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:47:08.426410       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:47:08.426420       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:47:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:47:08.626026       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:47:08.626051       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:47:08.626068       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:47:08.626919       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:47:38.626577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:47:38.626581       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:47:38.626702       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:47:38.627963       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 10:47:40.226217       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:47:40.226245       1 metrics.go:72] Registering metrics
	I1101 10:47:40.226310       1 controller.go:711] "Syncing nftables rules"
	I1101 10:47:48.626092       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:47:48.626131       1 main.go:301] handling current node
	I1101 10:47:58.625739       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:47:58.625802       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ffc25019ddaa4f34ce35fea177fcd8277a5073c8baf6d86e0373d70389879419] <==
	I1101 10:47:07.638559       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:47:07.686587       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:47:07.693428       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:47:07.718122       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 10:47:07.735869       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:47:07.735896       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 10:47:07.736032       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 10:47:07.736071       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:47:07.739736       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:47:07.739757       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:47:07.739765       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:47:07.739771       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:47:07.788454       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1101 10:47:07.889217       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:47:08.327604       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:47:09.438509       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:47:09.490685       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:47:09.525632       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:47:09.543267       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:47:09.566157       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:47:09.643281       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.49.197"}
	I1101 10:47:09.671707       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.201.37"}
	I1101 10:47:19.874706       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:47:20.126291       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 10:47:20.275194       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6bcc06202ec6dfdc8f6841ebe71d51a48215405eae12c71de3ca5b5238bb7214] <==
	I1101 10:47:19.970502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.359075ms"
	I1101 10:47:19.970667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.453µs"
	I1101 10:47:19.970728       1 shared_informer.go:318] Caches are synced for HPA
	I1101 10:47:19.979240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.418476ms"
	I1101 10:47:19.979525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.603882ms"
	I1101 10:47:19.979659       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.88µs"
	I1101 10:47:19.999014       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:47:19.999787       1 shared_informer.go:318] Caches are synced for job
	I1101 10:47:20.005984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.06µs"
	I1101 10:47:20.020165       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1101 10:47:20.077209       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 10:47:20.080524       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:47:20.279401       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1101 10:47:20.430493       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:47:20.430528       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:47:20.437632       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:47:26.866372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.114µs"
	I1101 10:47:27.888881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.465µs"
	I1101 10:47:28.888692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.221µs"
	I1101 10:47:31.911426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.651646ms"
	I1101 10:47:31.911602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.32µs"
	I1101 10:47:39.838308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.934676ms"
	I1101 10:47:39.838634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.799µs"
	I1101 10:47:42.931775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.557µs"
	I1101 10:47:51.764710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.767µs"
	
	
	==> kube-proxy [161d854de567b94f5c2d993b8ba213ead9931dd70f943c41c660ff3d0f4b9fc5] <==
	I1101 10:47:08.572497       1 server_others.go:69] "Using iptables proxy"
	I1101 10:47:08.606257       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 10:47:08.654712       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:47:08.656652       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:47:08.656745       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:47:08.656780       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:47:08.656851       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:47:08.657107       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:47:08.657310       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:47:08.658016       1 config.go:188] "Starting service config controller"
	I1101 10:47:08.658088       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:47:08.658130       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:47:08.658157       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:47:08.658646       1 config.go:315] "Starting node config controller"
	I1101 10:47:08.658692       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:47:08.758767       1 shared_informer.go:318] Caches are synced for service config
	I1101 10:47:08.758842       1 shared_informer.go:318] Caches are synced for node config
	I1101 10:47:08.758858       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7521d7f517bde774e4ae7db3c7fa527b4b635113e737a68e9c588db1e8e80227] <==
	I1101 10:47:05.872650       1 serving.go:348] Generated self-signed cert in-memory
	W1101 10:47:07.581517       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:47:07.581554       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:47:07.581567       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:47:07.581583       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:47:07.647546       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 10:47:07.647593       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:47:07.653305       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 10:47:07.653443       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:47:07.653458       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:47:07.653477       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 10:47:07.753559       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: E1101 10:47:21.100584     775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c8be916-2557-497f-a083-209059ecd4e4-kube-api-access-jg5nq podName:2c8be916-2557-497f-a083-209059ecd4e4 nodeName:}" failed. No retries permitted until 2025-11-01 10:47:21.600552353 +0000 UTC m=+19.045875373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jg5nq" (UniqueName: "kubernetes.io/projected/2c8be916-2557-497f-a083-209059ecd4e4-kube-api-access-jg5nq") pod "dashboard-metrics-scraper-5f989dc9cf-4mb2p" (UID: "2c8be916-2557-497f-a083-209059ecd4e4") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: E1101 10:47:21.106102     775 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: E1101 10:47:21.106143     775 projected.go:198] Error preparing data for projected volume kube-api-access-d6czc for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dwp8b: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: E1101 10:47:21.106212     775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/587849a0-79dc-4cc6-93f8-5c57c64fc5f2-kube-api-access-d6czc podName:587849a0-79dc-4cc6-93f8-5c57c64fc5f2 nodeName:}" failed. No retries permitted until 2025-11-01 10:47:21.606190142 +0000 UTC m=+19.051513170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d6czc" (UniqueName: "kubernetes.io/projected/587849a0-79dc-4cc6-93f8-5c57c64fc5f2-kube-api-access-d6czc") pod "kubernetes-dashboard-8694d4445c-dwp8b" (UID: "587849a0-79dc-4cc6-93f8-5c57c64fc5f2") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:47:21 old-k8s-version-245622 kubelet[775]: W1101 10:47:21.802948     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9c5181d464a6181465e6f934067c16c0de0c1424de2dac7397e877a81831ef3/crio-64170cdec1e41ec59eb87300657b7874248e53eb5be6bf85b6ef2383c565fc53 WatchSource:0}: Error finding container 64170cdec1e41ec59eb87300657b7874248e53eb5be6bf85b6ef2383c565fc53: Status 404 returned error can't find the container with id 64170cdec1e41ec59eb87300657b7874248e53eb5be6bf85b6ef2383c565fc53
	Nov 01 10:47:26 old-k8s-version-245622 kubelet[775]: I1101 10:47:26.851405     775 scope.go:117] "RemoveContainer" containerID="6135fe59d963240439aa4addd70a0d6a42ba1570a42cc243c74ec00a06f709a4"
	Nov 01 10:47:27 old-k8s-version-245622 kubelet[775]: I1101 10:47:27.861701     775 scope.go:117] "RemoveContainer" containerID="6135fe59d963240439aa4addd70a0d6a42ba1570a42cc243c74ec00a06f709a4"
	Nov 01 10:47:27 old-k8s-version-245622 kubelet[775]: I1101 10:47:27.862014     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:27 old-k8s-version-245622 kubelet[775]: E1101 10:47:27.862286     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:28 old-k8s-version-245622 kubelet[775]: I1101 10:47:28.866046     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:28 old-k8s-version-245622 kubelet[775]: E1101 10:47:28.867006     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:31 old-k8s-version-245622 kubelet[775]: I1101 10:47:31.747415     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:31 old-k8s-version-245622 kubelet[775]: E1101 10:47:31.748370     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:38 old-k8s-version-245622 kubelet[775]: I1101 10:47:38.893882     775 scope.go:117] "RemoveContainer" containerID="03411a1f4b138c8c725a6a4425f3dfb5b56fa9bd5b1cf0ba2d709f16df5fc3ae"
	Nov 01 10:47:38 old-k8s-version-245622 kubelet[775]: I1101 10:47:38.918095     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dwp8b" podStartSLOduration=10.75576953 podCreationTimestamp="2025-11-01 10:47:19 +0000 UTC" firstStartedPulling="2025-11-01 10:47:21.810251786 +0000 UTC m=+19.255574806" lastFinishedPulling="2025-11-01 10:47:30.971757349 +0000 UTC m=+28.417080377" observedRunningTime="2025-11-01 10:47:31.893552917 +0000 UTC m=+29.338875937" watchObservedRunningTime="2025-11-01 10:47:38.917275101 +0000 UTC m=+36.362598129"
	Nov 01 10:47:42 old-k8s-version-245622 kubelet[775]: I1101 10:47:42.682700     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:42 old-k8s-version-245622 kubelet[775]: I1101 10:47:42.907293     775 scope.go:117] "RemoveContainer" containerID="08159f508527edd0181d8b18a3322aba69f0d0ae0d87d65bed48a9f2d1bf4b52"
	Nov 01 10:47:42 old-k8s-version-245622 kubelet[775]: I1101 10:47:42.907609     775 scope.go:117] "RemoveContainer" containerID="475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f"
	Nov 01 10:47:42 old-k8s-version-245622 kubelet[775]: E1101 10:47:42.907964     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:51 old-k8s-version-245622 kubelet[775]: I1101 10:47:51.747665     775 scope.go:117] "RemoveContainer" containerID="475ddbab4788c5e5c6ecf1a4ede0ed372ea52adfa0cd5df6ca3d62dcd4069c3f"
	Nov 01 10:47:51 old-k8s-version-245622 kubelet[775]: E1101 10:47:51.748006     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4mb2p_kubernetes-dashboard(2c8be916-2557-497f-a083-209059ecd4e4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4mb2p" podUID="2c8be916-2557-497f-a083-209059ecd4e4"
	Nov 01 10:47:55 old-k8s-version-245622 kubelet[775]: I1101 10:47:55.047212     775 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 10:47:55 old-k8s-version-245622 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:47:55 old-k8s-version-245622 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:47:55 old-k8s-version-245622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b9f762a0b850c4519e12cfd7ea375cfaf75618638005b8751293904d0528b27d] <==
	2025/11/01 10:47:31 Using namespace: kubernetes-dashboard
	2025/11/01 10:47:31 Using in-cluster config to connect to apiserver
	2025/11/01 10:47:31 Using secret token for csrf signing
	2025/11/01 10:47:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:47:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:47:31 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 10:47:31 Generating JWE encryption key
	2025/11/01 10:47:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:47:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:47:32 Initializing JWE encryption key from synchronized object
	2025/11/01 10:47:32 Creating in-cluster Sidecar client
	2025/11/01 10:47:32 Serving insecurely on HTTP port: 9090
	2025/11/01 10:47:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:47:31 Starting overwatch
	
	
	==> storage-provisioner [03411a1f4b138c8c725a6a4425f3dfb5b56fa9bd5b1cf0ba2d709f16df5fc3ae] <==
	I1101 10:47:08.410222       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:47:38.412589       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b1a51a80d21f8f7f8ba1d74c8ad7ef2ab7b934b22e7e6b778d90a80c64f1f40c] <==
	I1101 10:47:38.945175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:47:38.958732       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:47:38.958783       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:47:56.359090       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:47:56.361489       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245622_1f1bcbc9-bcc2-463a-b449-2fbe27f5d9ff!
	I1101 10:47:56.363256       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51e990f1-a0af-4cdb-b36a-ecec58b0ed5a", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-245622_1f1bcbc9-bcc2-463a-b449-2fbe27f5d9ff became leader
	I1101 10:47:56.461866       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245622_1f1bcbc9-bcc2-463a-b449-2fbe27f5d9ff!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-245622 -n old-k8s-version-245622
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-245622 -n old-k8s-version-245622: exit status 2 (389.989874ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-245622 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.033148ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:49:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-014050 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-014050 describe deploy/metrics-server -n kube-system: exit status 1 (98.439355ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-014050 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
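Note: the exit status 11 above comes from minikube's paused-state check rather than from the metrics-server addon itself; per the stderr, the check runs `sudo runc list -f json` on the node, and that command fails because `/run/runc` does not exist under this crio runtime. A minimal sketch for reproducing the failing check by hand, assuming only the profile name from this run (the crictl call is an added diagnostic, not something the test executes):

	# Re-run the exact command quoted in the stderr above, inside the node (assumed to be where the check executes).
	out/minikube-linux-arm64 -p default-k8s-diff-port-014050 ssh -- sudo runc list -f json
	# Hypothetical cross-check: ask CRI-O directly which containers it knows about.
	out/minikube-linux-arm64 -p default-k8s-diff-port-014050 ssh -- sudo crictl ps -a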
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-014050
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-014050:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6",
	        "Created": "2025-11-01T10:48:10.158242588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478551,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:48:10.224658024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/hostname",
	        "HostsPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/hosts",
	        "LogPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6-json.log",
	        "Name": "/default-k8s-diff-port-014050",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-014050:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-014050",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6",
	                "LowerDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-014050",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-014050/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-014050",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-014050",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-014050",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c04ee54632f39a8a3a2cbef9ad308493f9ea130d85d0c08d433ee85217002b9b",
	            "SandboxKey": "/var/run/docker/netns/c04ee54632f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-014050": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:91:10:e9:4c:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f438d7bf3e688fe5caa6340faa58ea25b1a6b5b20c8ce821e7570063338cd36",
	                    "EndpointID": "f24cb91860d8e68097939d97e6c487ef8fa35cae7064dba781bd04fa77abecf6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-014050",
	                        "70da30e95fce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-014050 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-014050 logs -n 25: (1.171930987s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-883951                                                                                                                                                                                                                              │ cilium-883951                │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p force-systemd-env-555657 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-555657     │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-946953    │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-946953    │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p kubernetes-upgrade-946953                                                                                                                                                                                                                  │ kubernetes-upgrade-946953    │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p force-systemd-env-555657                                                                                                                                                                                                                   │ force-systemd-env-555657     │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p cert-options-186677 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ cert-options-186677 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ -p cert-options-186677 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ delete  │ -p cert-options-186677                                                                                                                                                                                                                        │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │                     │
	│ stop    │ -p old-k8s-version-245622 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-245622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:47 UTC │
	│ image   │ old-k8s-version-245622 image list --format=json                                                                                                                                                                                               │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │ 01 Nov 25 10:47 UTC │
	│ pause   │ -p old-k8s-version-245622 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │                     │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p cert-expiration-308600                                                                                                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:48:52
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:48:52.671237  482008 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:48:52.671478  482008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:52.671507  482008 out.go:374] Setting ErrFile to fd 2...
	I1101 10:48:52.671526  482008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:52.671847  482008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:48:52.672401  482008 out.go:368] Setting JSON to false
	I1101 10:48:52.673408  482008 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9085,"bootTime":1761985048,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:48:52.673513  482008 start.go:143] virtualization:  
	I1101 10:48:52.679633  482008 out.go:179] * [embed-certs-499088] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:48:52.683215  482008 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:48:52.683279  482008 notify.go:221] Checking for updates...
	I1101 10:48:52.689855  482008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:48:52.693152  482008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:48:52.696329  482008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:48:52.699360  482008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:48:52.702341  482008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:48:52.705947  482008 config.go:182] Loaded profile config "default-k8s-diff-port-014050": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:48:52.706100  482008 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:48:52.743004  482008 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:48:52.743150  482008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:52.801350  482008 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:48:52.79151551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:52.801462  482008 docker.go:319] overlay module found
	I1101 10:48:52.805068  482008 out.go:179] * Using the docker driver based on user configuration
	I1101 10:48:52.808062  482008 start.go:309] selected driver: docker
	I1101 10:48:52.808087  482008 start.go:930] validating driver "docker" against <nil>
	I1101 10:48:52.808104  482008 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:48:52.808907  482008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:52.865968  482008 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:48:52.856035033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:52.866127  482008 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:48:52.866382  482008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:48:52.869371  482008 out.go:179] * Using Docker driver with root privileges
	I1101 10:48:52.872322  482008 cni.go:84] Creating CNI manager for ""
	I1101 10:48:52.872382  482008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:48:52.872392  482008 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:48:52.872470  482008 start.go:353] cluster config:
	{Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:48:52.875613  482008 out.go:179] * Starting "embed-certs-499088" primary control-plane node in "embed-certs-499088" cluster
	I1101 10:48:52.878574  482008 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:48:52.881652  482008 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:48:52.884565  482008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:48:52.884620  482008 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:48:52.884632  482008 cache.go:59] Caching tarball of preloaded images
	I1101 10:48:52.884695  482008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:48:52.884728  482008 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:48:52.884739  482008 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:48:52.884840  482008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/config.json ...
	I1101 10:48:52.884856  482008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/config.json: {Name:mk9d2463e90b9271c84a9dabf3bdbf1dc378dccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:48:52.904847  482008 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:48:52.904870  482008 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:48:52.904888  482008 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:48:52.904911  482008 start.go:360] acquireMachinesLock for embed-certs-499088: {Name:mk5ad922c2d628b6bdeae9b2175ff7077c575607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:48:52.905037  482008 start.go:364] duration metric: took 82.503µs to acquireMachinesLock for "embed-certs-499088"
	I1101 10:48:52.905070  482008 start.go:93] Provisioning new machine with config: &{Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:48:52.905150  482008 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:48:49.609330  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:48:52.110614  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	I1101 10:48:52.910458  482008 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:48:52.910717  482008 start.go:159] libmachine.API.Create for "embed-certs-499088" (driver="docker")
	I1101 10:48:52.910765  482008 client.go:173] LocalClient.Create starting
	I1101 10:48:52.910843  482008 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 10:48:52.910881  482008 main.go:143] libmachine: Decoding PEM data...
	I1101 10:48:52.910909  482008 main.go:143] libmachine: Parsing certificate...
	I1101 10:48:52.910968  482008 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 10:48:52.910994  482008 main.go:143] libmachine: Decoding PEM data...
	I1101 10:48:52.911007  482008 main.go:143] libmachine: Parsing certificate...
	I1101 10:48:52.911362  482008 cli_runner.go:164] Run: docker network inspect embed-certs-499088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:48:52.928010  482008 cli_runner.go:211] docker network inspect embed-certs-499088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:48:52.928095  482008 network_create.go:284] running [docker network inspect embed-certs-499088] to gather additional debugging logs...
	I1101 10:48:52.928124  482008 cli_runner.go:164] Run: docker network inspect embed-certs-499088
	W1101 10:48:52.944559  482008 cli_runner.go:211] docker network inspect embed-certs-499088 returned with exit code 1
	I1101 10:48:52.944604  482008 network_create.go:287] error running [docker network inspect embed-certs-499088]: docker network inspect embed-certs-499088: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-499088 not found
	I1101 10:48:52.944619  482008 network_create.go:289] output of [docker network inspect embed-certs-499088]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-499088 not found
	
	** /stderr **
	I1101 10:48:52.944713  482008 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:48:52.967377  482008 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
	I1101 10:48:52.967762  482008 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-adecbbb769f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:b0:b5:2e:4c:30} reservation:<nil>}
	I1101 10:48:52.968005  482008 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2077d26d1806 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:49:68:b6:9e:fb} reservation:<nil>}
	I1101 10:48:52.968422  482008 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197b190}
	I1101 10:48:52.968439  482008 network_create.go:124] attempt to create docker network embed-certs-499088 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:48:52.968504  482008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-499088 embed-certs-499088
	I1101 10:48:53.042131  482008 network_create.go:108] docker network embed-certs-499088 192.168.76.0/24 created
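For reference, the bridge-network step above boils down to a single docker invocation. A minimal Go sketch, assuming that shelling out to the docker CLI (as cli_runner does) is acceptable; the subnet, gateway and labels are taken from the log, the helper name is invented here, and error handling is kept to a bare minimum:

package main

import (
	"fmt"
	"os/exec"
)

// createClusterNetwork (name invented for this sketch) mirrors the `docker network create`
// invocation from the log: a /24 bridge with a fixed gateway, ip-masq and icc enabled,
// MTU 1500, and minikube's ownership labels.
func createClusterNetwork(name, subnet, gateway string) error {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("docker network create %s: %v\n%s", name, err, out)
	}
	return nil
}

func main() {
	// Values as chosen in this run; 192.168.76.0/24 is only used because the
	// 49/58/67 subnets were already taken by other profiles.
	if err := createClusterNetwork("embed-certs-499088", "192.168.76.0/24", "192.168.76.1"); err != nil {
		fmt.Println(err)
	}
}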
	I1101 10:48:53.042169  482008 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-499088" container
	I1101 10:48:53.042263  482008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:48:53.060620  482008 cli_runner.go:164] Run: docker volume create embed-certs-499088 --label name.minikube.sigs.k8s.io=embed-certs-499088 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:48:53.079982  482008 oci.go:103] Successfully created a docker volume embed-certs-499088
	I1101 10:48:53.080173  482008 cli_runner.go:164] Run: docker run --rm --name embed-certs-499088-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-499088 --entrypoint /usr/bin/test -v embed-certs-499088:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:48:53.650023  482008 oci.go:107] Successfully prepared a docker volume embed-certs-499088
	I1101 10:48:53.650089  482008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:48:53.650110  482008 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:48:53.650183  482008 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-499088:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 10:48:54.608969  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:48:56.609261  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:48:59.109398  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	I1101 10:48:58.062272  482008 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-499088:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.412030489s)
	I1101 10:48:58.062308  482008 kic.go:203] duration metric: took 4.412194503s to extract preloaded images to volume ...
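The preload step uses the same shell-out pattern: a throwaway container whose entrypoint is tar, with the lz4 preload tarball mounted read-only and the cluster's named volume as the extraction target. A sketch under those assumptions; the helper name is invented here and the image is shown without its sha256 digest for brevity (the log pins the digest):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload (helper name invented here) unpacks the preloaded-images tarball into the
// cluster's named volume by running tar inside a throwaway kicbase container, as in the log.
func extractPreload(tarball, volume, image string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extracting preload: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4",
		"embed-certs-499088",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773",
	)
	if err != nil {
		fmt.Println(err)
	}
}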
	W1101 10:48:58.062447  482008 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:48:58.062559  482008 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:48:58.126314  482008 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-499088 --name embed-certs-499088 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-499088 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-499088 --network embed-certs-499088 --ip 192.168.76.2 --volume embed-certs-499088:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:48:58.467616  482008 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Running}}
	I1101 10:48:58.496651  482008 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:48:58.522408  482008 cli_runner.go:164] Run: docker exec embed-certs-499088 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:48:58.581307  482008 oci.go:144] the created container "embed-certs-499088" has a running status.
	I1101 10:48:58.581333  482008 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa...
	I1101 10:48:59.094710  482008 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:48:59.117787  482008 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:48:59.135509  482008 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:48:59.135530  482008 kic_runner.go:114] Args: [docker exec --privileged embed-certs-499088 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:48:59.175845  482008 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:48:59.194182  482008 machine.go:94] provisionDockerMachine start ...
	I1101 10:48:59.194299  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:48:59.216693  482008 main.go:143] libmachine: Using SSH client type: native
	I1101 10:48:59.217112  482008 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1101 10:48:59.217132  482008 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:48:59.217749  482008 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49622->127.0.0.1:33433: read: connection reset by peer
	I1101 10:49:02.368593  482008 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-499088
	
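The "Error dialing TCP ... connection reset by peer" line followed a moment later by a clean "SSH cmd err, output: <nil>" is the usual wait-for-sshd pattern: the forwarded port (127.0.0.1:33433 here) accepts connections before sshd inside the container is ready, so the client simply retries. A sketch of that retry loop with golang.org/x/crypto/ssh; the key path, user and port come from the log, while the helper name, attempt count and backoff are assumptions of this sketch:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry (helper name invented here) retries until sshd inside the container is
// reachable; the first attempts commonly fail with "connection reset by peer".
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, lastErr
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway container
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33433", cfg, 30) // forwarded port from the log
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh ready")
}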
	I1101 10:49:02.368618  482008 ubuntu.go:182] provisioning hostname "embed-certs-499088"
	I1101 10:49:02.368689  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:02.386809  482008 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:02.387136  482008 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1101 10:49:02.387155  482008 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-499088 && echo "embed-certs-499088" | sudo tee /etc/hostname
	I1101 10:49:02.547227  482008 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-499088
	
	I1101 10:49:02.547302  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:02.565039  482008 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:02.565355  482008 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1101 10:49:02.565376  482008 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-499088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-499088/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-499088' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1101 10:49:01.608710  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:49:03.609679  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	I1101 10:49:02.713631  482008 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:49:02.713659  482008 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:49:02.713686  482008 ubuntu.go:190] setting up certificates
	I1101 10:49:02.713696  482008 provision.go:84] configureAuth start
	I1101 10:49:02.713755  482008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
	I1101 10:49:02.730859  482008 provision.go:143] copyHostCerts
	I1101 10:49:02.730926  482008 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:49:02.730937  482008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:49:02.731016  482008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:49:02.731117  482008 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:49:02.731126  482008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:49:02.731153  482008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:49:02.731220  482008 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:49:02.731228  482008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:49:02.731252  482008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:49:02.731341  482008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.embed-certs-499088 san=[127.0.0.1 192.168.76.2 embed-certs-499088 localhost minikube]
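The SAN list in that line (127.0.0.1, 192.168.76.2, embed-certs-499088, localhost, minikube) is the interesting part: the server certificate has to be valid for the container IP, the loopback-forwarded API endpoint, and the host names minikube uses. A sketch of issuing such a certificate with crypto/x509; the throwaway CA generated inline is an assumption made only to keep the example self-contained (minikube signs with its persistent ca.pem/ca-key.pem instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA, generated inline only so the sketch runs stand-alone.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-499088"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"embed-certs-499088", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}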
	I1101 10:49:03.253538  482008 provision.go:177] copyRemoteCerts
	I1101 10:49:03.253618  482008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:49:03.253661  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:03.273035  482008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:49:03.376798  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:49:03.399053  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:49:03.419919  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:49:03.442447  482008 provision.go:87] duration metric: took 728.728629ms to configureAuth
	I1101 10:49:03.442477  482008 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:49:03.442714  482008 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:03.442822  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:03.460712  482008 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:03.461105  482008 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1101 10:49:03.461129  482008 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:49:03.734056  482008 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:49:03.734075  482008 machine.go:97] duration metric: took 4.539867146s to provisionDockerMachine
	I1101 10:49:03.734085  482008 client.go:176] duration metric: took 10.82330547s to LocalClient.Create
	I1101 10:49:03.734103  482008 start.go:167] duration metric: took 10.823387752s to libmachine.API.Create "embed-certs-499088"
	I1101 10:49:03.734110  482008 start.go:293] postStartSetup for "embed-certs-499088" (driver="docker")
	I1101 10:49:03.734120  482008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:49:03.734187  482008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:49:03.734230  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:03.756572  482008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:49:03.865375  482008 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:49:03.868942  482008 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:49:03.869014  482008 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:49:03.869032  482008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:49:03.869097  482008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:49:03.869184  482008 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:49:03.869295  482008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:49:03.876881  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:49:03.896837  482008 start.go:296] duration metric: took 162.710302ms for postStartSetup
	I1101 10:49:03.897311  482008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
	I1101 10:49:03.914073  482008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/config.json ...
	I1101 10:49:03.914364  482008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:49:03.914410  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:03.931195  482008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:49:04.042512  482008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:49:04.047785  482008 start.go:128] duration metric: took 11.142617039s to createHost
	I1101 10:49:04.047813  482008 start.go:83] releasing machines lock for "embed-certs-499088", held for 11.142759546s
	I1101 10:49:04.047894  482008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
	I1101 10:49:04.067017  482008 ssh_runner.go:195] Run: cat /version.json
	I1101 10:49:04.067374  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:04.067836  482008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:49:04.067908  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:04.091033  482008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:49:04.113424  482008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:49:04.196844  482008 ssh_runner.go:195] Run: systemctl --version
	I1101 10:49:04.294778  482008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:49:04.333611  482008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:49:04.338437  482008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:49:04.338559  482008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:49:04.369072  482008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:49:04.369143  482008 start.go:496] detecting cgroup driver to use...
	I1101 10:49:04.369192  482008 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:49:04.369266  482008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:49:04.387575  482008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:49:04.400606  482008 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:49:04.400670  482008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:49:04.418699  482008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:49:04.438661  482008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:49:04.565933  482008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:49:04.708994  482008 docker.go:234] disabling docker service ...
	I1101 10:49:04.709070  482008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:49:04.734597  482008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:49:04.749834  482008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:49:04.878083  482008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:49:05.008487  482008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:49:05.026343  482008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:49:05.043518  482008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:49:05.043639  482008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:05.053065  482008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:49:05.053209  482008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:05.063071  482008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:05.073883  482008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:05.085568  482008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:49:05.095851  482008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:05.106245  482008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:05.124298  482008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:05.133747  482008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:49:05.142199  482008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:49:05.150322  482008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:05.267495  482008 ssh_runner.go:195] Run: sudo systemctl restart crio
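Taken together, the sed edits above converge on a small CRI-O drop-in: pause image pinned, cgroupfs as the cgroup manager, conmon placed in the pod cgroup, and unprivileged low ports enabled. Assuming those are the only keys touched, the equivalent end state can be written directly; section placement follows CRI-O's stock crio.conf layout, so treat this as an illustrative sketch rather than minikube's literal file:

package main

import "os"

// The end state of the sed edits above, expressed as a single drop-in file.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
	// followed by `systemctl daemon-reload` and `systemctl restart crio`, as in the log
}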
	I1101 10:49:05.391124  482008 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:49:05.391260  482008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:49:05.395746  482008 start.go:564] Will wait 60s for crictl version
	I1101 10:49:05.395861  482008 ssh_runner.go:195] Run: which crictl
	I1101 10:49:05.400053  482008 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:49:05.432084  482008 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:49:05.432238  482008 ssh_runner.go:195] Run: crio --version
	I1101 10:49:05.461360  482008 ssh_runner.go:195] Run: crio --version
	I1101 10:49:05.502244  482008 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:49:05.505232  482008 cli_runner.go:164] Run: docker network inspect embed-certs-499088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:49:05.520974  482008 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:49:05.524820  482008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:49:05.535079  482008 kubeadm.go:884] updating cluster {Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:49:05.535195  482008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:49:05.535249  482008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:49:05.569155  482008 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:49:05.569178  482008 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:49:05.569240  482008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:49:05.595965  482008 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:49:05.595991  482008 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:49:05.596002  482008 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:49:05.596088  482008 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-499088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:49:05.596182  482008 ssh_runner.go:195] Run: crio config
	I1101 10:49:05.670635  482008 cni.go:84] Creating CNI manager for ""
	I1101 10:49:05.670662  482008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:49:05.670706  482008 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:49:05.670739  482008 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-499088 NodeName:embed-certs-499088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:49:05.670945  482008 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-499088"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:49:05.671036  482008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:49:05.679626  482008 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:49:05.679703  482008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:49:05.687755  482008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 10:49:05.703062  482008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:49:05.716611  482008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
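The kubeadm.yaml.new written above bundles the four YAML documents printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into a single multi-document file. A minimal Go sketch for inspecting such a file with gopkg.in/yaml.v3; the path and the expectation of exactly these kinds are taken from this log, not from minikube's own code:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path as shown in the log above; any multi-document YAML file works here.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			log.Fatal(err)
		}
		// Expect: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}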
	I1101 10:49:05.730248  482008 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:49:05.733847  482008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:49:05.745734  482008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:05.864645  482008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:49:05.883067  482008 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088 for IP: 192.168.76.2
	I1101 10:49:05.883089  482008 certs.go:195] generating shared ca certs ...
	I1101 10:49:05.883106  482008 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.883245  482008 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:49:05.883300  482008 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:49:05.883310  482008 certs.go:257] generating profile certs ...
	I1101 10:49:05.883385  482008 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/client.key
	I1101 10:49:05.883402  482008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/client.crt with IP's: []
	I1101 10:49:06.397797  482008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/client.crt ...
	I1101 10:49:06.397827  482008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/client.crt: {Name:mkfdb37880ed2089261abfadd6c18f73aaface38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:06.398029  482008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/client.key ...
	I1101 10:49:06.398044  482008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/client.key: {Name:mkd3fefb50d4f8a06e12b625e14c8a7a5b105da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:06.398138  482008 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key.ee4ebe0a
	I1101 10:49:06.398157  482008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.crt.ee4ebe0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:49:07.745486  482008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.crt.ee4ebe0a ...
	I1101 10:49:07.745519  482008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.crt.ee4ebe0a: {Name:mk75a7ba86d93cbe90cabc0866d37668d0d38d39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:07.745710  482008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key.ee4ebe0a ...
	I1101 10:49:07.745726  482008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key.ee4ebe0a: {Name:mk7f6ae2c8ca87b190761647016228a87653e765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:07.745808  482008 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.crt.ee4ebe0a -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.crt
	I1101 10:49:07.745900  482008 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key.ee4ebe0a -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key
	I1101 10:49:07.745965  482008 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.key
	I1101 10:49:07.745983  482008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.crt with IP's: []
	I1101 10:49:07.976198  482008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.crt ...
	I1101 10:49:07.976228  482008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.crt: {Name:mk7516283e0450d6f681e2c4251e36932e924b81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:07.976417  482008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.key ...
	I1101 10:49:07.976433  482008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.key: {Name:mkf0325d319f597c56752d5d74d3f6854447bfce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:07.976626  482008 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:49:07.976674  482008 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:49:07.976688  482008 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:49:07.976714  482008 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:49:07.976739  482008 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:49:07.976770  482008 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:49:07.976818  482008 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:49:07.977428  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:49:07.996892  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:49:08.021336  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:49:08.040705  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:49:08.060583  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:49:08.079802  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:49:08.099830  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:49:08.119047  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:49:08.137930  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:49:08.157026  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:49:08.176595  482008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:49:08.197072  482008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
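The apiserver certificate copied above was generated with the IP SANs listed earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) and the 26280h lifetime from the cluster config. A rough Go sketch of issuing a comparable CA-signed serving certificate with crypto/x509; the key size and common names here are illustrative assumptions, not minikube's exact parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA key pair; minikube reuses the persisted minikubeCA instead of generating one.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Leaf serving cert with the IP SANs seen in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
}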
	I1101 10:49:08.210770  482008 ssh_runner.go:195] Run: openssl version
	I1101 10:49:08.217644  482008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:49:08.228417  482008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:08.235972  482008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:08.236042  482008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:08.282876  482008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:49:08.291616  482008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:49:08.300013  482008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:49:08.303666  482008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:49:08.303748  482008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:49:08.347108  482008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:49:08.355442  482008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:49:08.363994  482008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:49:08.367829  482008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:49:08.367911  482008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:49:08.409362  482008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:49:08.418305  482008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:49:08.423295  482008 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:49:08.423345  482008 kubeadm.go:401] StartCluster: {Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:49:08.423416  482008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:49:08.423490  482008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:49:08.450836  482008 cri.go:89] found id: ""
	I1101 10:49:08.450904  482008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:49:08.459178  482008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:49:08.467292  482008 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:49:08.467362  482008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:49:08.477634  482008 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:49:08.477656  482008 kubeadm.go:158] found existing configuration files:
	
	I1101 10:49:08.477709  482008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:49:08.487698  482008 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:49:08.487843  482008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:49:08.496575  482008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:49:08.505358  482008 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:49:08.505425  482008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:49:08.513677  482008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:49:08.521670  482008 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:49:08.521757  482008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:49:08.529265  482008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:49:08.537139  482008 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:49:08.537235  482008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:49:08.545336  482008 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:49:08.595188  482008 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:49:08.595498  482008 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:49:08.623621  482008 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:49:08.623701  482008 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:49:08.623750  482008 kubeadm.go:319] OS: Linux
	I1101 10:49:08.623802  482008 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:49:08.623864  482008 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:49:08.623918  482008 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:49:08.623973  482008 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:49:08.624027  482008 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:49:08.624082  482008 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:49:08.624133  482008 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:49:08.624196  482008 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:49:08.624248  482008 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:49:08.694573  482008 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:49:08.694692  482008 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:49:08.694794  482008 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:49:08.703231  482008 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 10:49:06.109058  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:49:08.110158  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	I1101 10:49:08.709220  482008 out.go:252]   - Generating certificates and keys ...
	I1101 10:49:08.709398  482008 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:49:08.709513  482008 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:49:09.148775  482008 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:49:09.600084  482008 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:49:09.757983  482008 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:49:09.913484  482008 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:49:10.300171  482008 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:49:10.300622  482008 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-499088 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:49:11.998221  482008 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:49:11.998602  482008 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-499088 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:49:12.225929  482008 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:49:12.405895  482008 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	W1101 10:49:10.609022  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:49:12.609156  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	I1101 10:49:13.575056  482008 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:49:13.575388  482008 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:49:13.643409  482008 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:49:13.983054  482008 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:49:14.243384  482008 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:49:14.790060  482008 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:49:14.935591  482008 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:49:14.936281  482008 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:49:14.941863  482008 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:49:14.945248  482008 out.go:252]   - Booting up control plane ...
	I1101 10:49:14.945359  482008 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:49:14.945442  482008 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:49:14.945512  482008 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:49:14.981514  482008 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:49:14.981642  482008 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:49:14.989139  482008 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:49:14.989490  482008 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:49:14.989540  482008 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:49:15.163154  482008 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:49:15.163279  482008 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:49:17.164242  482008 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.00170703s
	I1101 10:49:17.167660  482008 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:49:17.167766  482008 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 10:49:17.168080  482008 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:49:17.168172  482008 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 10:49:14.609988  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:49:17.109237  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	I1101 10:49:19.314932  482008 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.146717357s
	I1101 10:49:22.057761  482008 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.889991641s
	I1101 10:49:24.171416  482008 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003616398s
	I1101 10:49:24.196857  482008 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:49:24.213747  482008 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:49:24.230630  482008 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:49:24.230888  482008 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-499088 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:49:24.246696  482008 kubeadm.go:319] [bootstrap-token] Using token: ajjskh.a4dqq0ma67d1ghvw
	W1101 10:49:19.609402  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:49:22.109558  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	W1101 10:49:24.109685  478164 node_ready.go:57] node "default-k8s-diff-port-014050" has "Ready":"False" status (will retry)
	I1101 10:49:24.249565  482008 out.go:252]   - Configuring RBAC rules ...
	I1101 10:49:24.249698  482008 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:49:24.254777  482008 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:49:24.262976  482008 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:49:24.268191  482008 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:49:24.274538  482008 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:49:24.279085  482008 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:49:24.581236  482008 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:49:25.153392  482008 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:49:25.583484  482008 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:49:25.584860  482008 kubeadm.go:319] 
	I1101 10:49:25.584960  482008 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:49:25.584972  482008 kubeadm.go:319] 
	I1101 10:49:25.585053  482008 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:49:25.585063  482008 kubeadm.go:319] 
	I1101 10:49:25.585090  482008 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:49:25.585163  482008 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:49:25.585221  482008 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:49:25.585230  482008 kubeadm.go:319] 
	I1101 10:49:25.585287  482008 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:49:25.585296  482008 kubeadm.go:319] 
	I1101 10:49:25.585346  482008 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:49:25.585355  482008 kubeadm.go:319] 
	I1101 10:49:25.585410  482008 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:49:25.585492  482008 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:49:25.585568  482008 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:49:25.585576  482008 kubeadm.go:319] 
	I1101 10:49:25.585665  482008 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:49:25.585749  482008 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:49:25.585759  482008 kubeadm.go:319] 
	I1101 10:49:25.585847  482008 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ajjskh.a4dqq0ma67d1ghvw \
	I1101 10:49:25.585959  482008 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 10:49:25.585985  482008 kubeadm.go:319] 	--control-plane 
	I1101 10:49:25.585994  482008 kubeadm.go:319] 
	I1101 10:49:25.586083  482008 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:49:25.586092  482008 kubeadm.go:319] 
	I1101 10:49:25.586180  482008 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ajjskh.a4dqq0ma67d1ghvw \
	I1101 10:49:25.586296  482008 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 10:49:25.592812  482008 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:49:25.593102  482008 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:49:25.593239  482008 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
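The --discovery-token-ca-cert-hash printed in the join command above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that recomputes it from the CA certificate (the path matches the node layout used earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA cert location on the minikube node, as copied earlier in this log.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the raw SubjectPublicKeyInfo of the CA cert.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}

Run against the ca.crt of this cluster, it should reproduce the sha256:4d8e4ef2… value embedded in the join commands above.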
	I1101 10:49:25.593283  482008 cni.go:84] Creating CNI manager for ""
	I1101 10:49:25.593296  482008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:49:25.596597  482008 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:49:25.599682  482008 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:49:25.605468  482008 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:49:25.605492  482008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:49:25.627739  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:49:25.984682  482008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:49:25.984827  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:25.984912  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-499088 minikube.k8s.io/updated_at=2025_11_01T10_49_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=embed-certs-499088 minikube.k8s.io/primary=true
	I1101 10:49:26.142090  482008 ops.go:34] apiserver oom_adj: -16
	I1101 10:49:26.142265  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:26.642943  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:27.142662  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:27.642754  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:25.147442  478164 node_ready.go:49] node "default-k8s-diff-port-014050" is "Ready"
	I1101 10:49:25.147522  478164 node_ready.go:38] duration metric: took 39.541846616s for node "default-k8s-diff-port-014050" to be "Ready" ...
	I1101 10:49:25.147537  478164 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:49:25.147606  478164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:49:25.173986  478164 api_server.go:72] duration metric: took 41.92353916s to wait for apiserver process to appear ...
	I1101 10:49:25.174028  478164 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:49:25.174049  478164 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1101 10:49:25.192307  478164 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 10:49:25.193787  478164 api_server.go:141] control plane version: v1.34.1
	I1101 10:49:25.193822  478164 api_server.go:131] duration metric: took 19.782865ms to wait for apiserver health ...
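The healthz probe above is a plain HTTPS GET against the apiserver. A minimal Go equivalent; it skips TLS verification purely for brevity (minikube itself verifies against the cluster CA via the kubeconfig), and the address and port are the ones from this particular run:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8444/healthz") // endpoint from the log; adjust for your cluster
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver answers: 200 ok
}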
	I1101 10:49:25.193833  478164 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:49:25.204135  478164 system_pods.go:59] 8 kube-system pods found
	I1101 10:49:25.204179  478164 system_pods.go:61] "coredns-66bc5c9577-cs5l2" [7b7eb708-3da6-4cad-ac28-f540c6024c62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:49:25.204186  478164 system_pods.go:61] "etcd-default-k8s-diff-port-014050" [ff74ab50-5145-4008-b755-f225069f6886] Running
	I1101 10:49:25.204193  478164 system_pods.go:61] "kindnet-j2vhl" [f4616783-98b7-4d54-b6b4-9f4b8bb30786] Running
	I1101 10:49:25.204198  478164 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-014050" [239a8d41-fbe2-4033-af92-c65be32b02a4] Running
	I1101 10:49:25.204203  478164 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-014050" [7f8d6010-1290-47f3-90fc-1691db840658] Running
	I1101 10:49:25.204209  478164 system_pods.go:61] "kube-proxy-jhf2k" [c34f672d-ef6a-48f1-bd77-63fac4364e78] Running
	I1101 10:49:25.204218  478164 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-014050" [8fc96498-e77c-4641-93ff-499959f9b8b3] Running
	I1101 10:49:25.204222  478164 system_pods.go:61] "storage-provisioner" [faa93d67-48d9-4840-9a3c-57ffb8b81d04] Pending
	I1101 10:49:25.204235  478164 system_pods.go:74] duration metric: took 10.39666ms to wait for pod list to return data ...
	I1101 10:49:25.204244  478164 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:49:25.210379  478164 default_sa.go:45] found service account: "default"
	I1101 10:49:25.210407  478164 default_sa.go:55] duration metric: took 6.1509ms for default service account to be created ...
	I1101 10:49:25.210417  478164 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:49:25.219990  478164 system_pods.go:86] 8 kube-system pods found
	I1101 10:49:25.220028  478164 system_pods.go:89] "coredns-66bc5c9577-cs5l2" [7b7eb708-3da6-4cad-ac28-f540c6024c62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:49:25.220035  478164 system_pods.go:89] "etcd-default-k8s-diff-port-014050" [ff74ab50-5145-4008-b755-f225069f6886] Running
	I1101 10:49:25.220042  478164 system_pods.go:89] "kindnet-j2vhl" [f4616783-98b7-4d54-b6b4-9f4b8bb30786] Running
	I1101 10:49:25.220048  478164 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-014050" [239a8d41-fbe2-4033-af92-c65be32b02a4] Running
	I1101 10:49:25.220058  478164 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-014050" [7f8d6010-1290-47f3-90fc-1691db840658] Running
	I1101 10:49:25.220068  478164 system_pods.go:89] "kube-proxy-jhf2k" [c34f672d-ef6a-48f1-bd77-63fac4364e78] Running
	I1101 10:49:25.220072  478164 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-014050" [8fc96498-e77c-4641-93ff-499959f9b8b3] Running
	I1101 10:49:25.220084  478164 system_pods.go:89] "storage-provisioner" [faa93d67-48d9-4840-9a3c-57ffb8b81d04] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:49:25.220109  478164 retry.go:31] will retry after 209.874891ms: missing components: kube-dns
	I1101 10:49:25.435614  478164 system_pods.go:86] 8 kube-system pods found
	I1101 10:49:25.435654  478164 system_pods.go:89] "coredns-66bc5c9577-cs5l2" [7b7eb708-3da6-4cad-ac28-f540c6024c62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:49:25.435662  478164 system_pods.go:89] "etcd-default-k8s-diff-port-014050" [ff74ab50-5145-4008-b755-f225069f6886] Running
	I1101 10:49:25.435669  478164 system_pods.go:89] "kindnet-j2vhl" [f4616783-98b7-4d54-b6b4-9f4b8bb30786] Running
	I1101 10:49:25.435674  478164 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-014050" [239a8d41-fbe2-4033-af92-c65be32b02a4] Running
	I1101 10:49:25.435678  478164 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-014050" [7f8d6010-1290-47f3-90fc-1691db840658] Running
	I1101 10:49:25.435683  478164 system_pods.go:89] "kube-proxy-jhf2k" [c34f672d-ef6a-48f1-bd77-63fac4364e78] Running
	I1101 10:49:25.435687  478164 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-014050" [8fc96498-e77c-4641-93ff-499959f9b8b3] Running
	I1101 10:49:25.435693  478164 system_pods.go:89] "storage-provisioner" [faa93d67-48d9-4840-9a3c-57ffb8b81d04] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:49:25.435723  478164 retry.go:31] will retry after 368.371754ms: missing components: kube-dns
	I1101 10:49:25.809470  478164 system_pods.go:86] 8 kube-system pods found
	I1101 10:49:25.809509  478164 system_pods.go:89] "coredns-66bc5c9577-cs5l2" [7b7eb708-3da6-4cad-ac28-f540c6024c62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:49:25.809517  478164 system_pods.go:89] "etcd-default-k8s-diff-port-014050" [ff74ab50-5145-4008-b755-f225069f6886] Running
	I1101 10:49:25.809523  478164 system_pods.go:89] "kindnet-j2vhl" [f4616783-98b7-4d54-b6b4-9f4b8bb30786] Running
	I1101 10:49:25.809528  478164 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-014050" [239a8d41-fbe2-4033-af92-c65be32b02a4] Running
	I1101 10:49:25.809532  478164 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-014050" [7f8d6010-1290-47f3-90fc-1691db840658] Running
	I1101 10:49:25.809536  478164 system_pods.go:89] "kube-proxy-jhf2k" [c34f672d-ef6a-48f1-bd77-63fac4364e78] Running
	I1101 10:49:25.809540  478164 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-014050" [8fc96498-e77c-4641-93ff-499959f9b8b3] Running
	I1101 10:49:25.809547  478164 system_pods.go:89] "storage-provisioner" [faa93d67-48d9-4840-9a3c-57ffb8b81d04] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:49:25.809562  478164 retry.go:31] will retry after 429.764689ms: missing components: kube-dns
	I1101 10:49:26.244589  478164 system_pods.go:86] 8 kube-system pods found
	I1101 10:49:26.244629  478164 system_pods.go:89] "coredns-66bc5c9577-cs5l2" [7b7eb708-3da6-4cad-ac28-f540c6024c62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:49:26.244638  478164 system_pods.go:89] "etcd-default-k8s-diff-port-014050" [ff74ab50-5145-4008-b755-f225069f6886] Running
	I1101 10:49:26.244645  478164 system_pods.go:89] "kindnet-j2vhl" [f4616783-98b7-4d54-b6b4-9f4b8bb30786] Running
	I1101 10:49:26.244649  478164 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-014050" [239a8d41-fbe2-4033-af92-c65be32b02a4] Running
	I1101 10:49:26.244653  478164 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-014050" [7f8d6010-1290-47f3-90fc-1691db840658] Running
	I1101 10:49:26.244657  478164 system_pods.go:89] "kube-proxy-jhf2k" [c34f672d-ef6a-48f1-bd77-63fac4364e78] Running
	I1101 10:49:26.244661  478164 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-014050" [8fc96498-e77c-4641-93ff-499959f9b8b3] Running
	I1101 10:49:26.244667  478164 system_pods.go:89] "storage-provisioner" [faa93d67-48d9-4840-9a3c-57ffb8b81d04] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:49:26.244684  478164 retry.go:31] will retry after 407.216958ms: missing components: kube-dns
	I1101 10:49:26.660304  478164 system_pods.go:86] 8 kube-system pods found
	I1101 10:49:26.660340  478164 system_pods.go:89] "coredns-66bc5c9577-cs5l2" [7b7eb708-3da6-4cad-ac28-f540c6024c62] Running
	I1101 10:49:26.660347  478164 system_pods.go:89] "etcd-default-k8s-diff-port-014050" [ff74ab50-5145-4008-b755-f225069f6886] Running
	I1101 10:49:26.660357  478164 system_pods.go:89] "kindnet-j2vhl" [f4616783-98b7-4d54-b6b4-9f4b8bb30786] Running
	I1101 10:49:26.660361  478164 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-014050" [239a8d41-fbe2-4033-af92-c65be32b02a4] Running
	I1101 10:49:26.660365  478164 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-014050" [7f8d6010-1290-47f3-90fc-1691db840658] Running
	I1101 10:49:26.660369  478164 system_pods.go:89] "kube-proxy-jhf2k" [c34f672d-ef6a-48f1-bd77-63fac4364e78] Running
	I1101 10:49:26.660373  478164 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-014050" [8fc96498-e77c-4641-93ff-499959f9b8b3] Running
	I1101 10:49:26.660378  478164 system_pods.go:89] "storage-provisioner" [faa93d67-48d9-4840-9a3c-57ffb8b81d04] Running
	I1101 10:49:26.660387  478164 system_pods.go:126] duration metric: took 1.44996355s to wait for k8s-apps to be running ...
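The "will retry after …" lines above come from minikube's retry helper, which re-runs the pod check with a growing, jittered delay until kube-dns is running or the timeout expires. A stripped-down sketch of that polling pattern; the delays and jitter policy here are illustrative, not minikube's exact values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check with a jittered, doubling delay until it succeeds or the deadline passes.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := waitFor(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns") // stands in for the pod check above
		}
		return nil
	})
	fmt.Println("done:", err)
}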
	I1101 10:49:26.660402  478164 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:49:26.660458  478164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:49:26.678111  478164 system_svc.go:56] duration metric: took 17.69985ms WaitForService to wait for kubelet
	I1101 10:49:26.678143  478164 kubeadm.go:587] duration metric: took 43.427701905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:49:26.678163  478164 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:49:26.682321  478164 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:49:26.682359  478164 node_conditions.go:123] node cpu capacity is 2
	I1101 10:49:26.682372  478164 node_conditions.go:105] duration metric: took 4.203387ms to run NodePressure ...
	I1101 10:49:26.682385  478164 start.go:242] waiting for startup goroutines ...
	I1101 10:49:26.682397  478164 start.go:247] waiting for cluster config update ...
	I1101 10:49:26.682408  478164 start.go:256] writing updated cluster config ...
	I1101 10:49:26.682699  478164 ssh_runner.go:195] Run: rm -f paused
	I1101 10:49:26.687313  478164 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:49:26.691460  478164 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cs5l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:26.697705  478164 pod_ready.go:94] pod "coredns-66bc5c9577-cs5l2" is "Ready"
	I1101 10:49:26.697742  478164 pod_ready.go:86] duration metric: took 6.241117ms for pod "coredns-66bc5c9577-cs5l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:26.700776  478164 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:26.705698  478164 pod_ready.go:94] pod "etcd-default-k8s-diff-port-014050" is "Ready"
	I1101 10:49:26.705724  478164 pod_ready.go:86] duration metric: took 4.914319ms for pod "etcd-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:26.708338  478164 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:26.726250  478164 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-014050" is "Ready"
	I1101 10:49:26.726277  478164 pod_ready.go:86] duration metric: took 17.903782ms for pod "kube-apiserver-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:26.731336  478164 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:27.091849  478164 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-014050" is "Ready"
	I1101 10:49:27.091879  478164 pod_ready.go:86] duration metric: took 360.518727ms for pod "kube-controller-manager-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:27.292338  478164 pod_ready.go:83] waiting for pod "kube-proxy-jhf2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:27.692072  478164 pod_ready.go:94] pod "kube-proxy-jhf2k" is "Ready"
	I1101 10:49:27.692142  478164 pod_ready.go:86] duration metric: took 399.731353ms for pod "kube-proxy-jhf2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:27.891820  478164 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:28.291101  478164 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-014050" is "Ready"
	I1101 10:49:28.291138  478164 pod_ready.go:86] duration metric: took 399.240279ms for pod "kube-scheduler-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:49:28.291151  478164 pod_ready.go:40] duration metric: took 1.603805549s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:49:28.352678  478164 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:49:28.356102  478164 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-014050" cluster and "default" namespace by default
	I1101 10:49:28.143265  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:28.642548  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:29.143100  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:29.642954  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:30.143234  482008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:30.249059  482008 kubeadm.go:1114] duration metric: took 4.264282927s to wait for elevateKubeSystemPrivileges
	I1101 10:49:30.249087  482008 kubeadm.go:403] duration metric: took 21.825745427s to StartCluster
	I1101 10:49:30.249115  482008 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:30.249184  482008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:49:30.250557  482008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:30.250796  482008 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:49:30.250978  482008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:49:30.251270  482008 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:30.251317  482008 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:49:30.251392  482008 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-499088"
	I1101 10:49:30.251408  482008 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-499088"
	I1101 10:49:30.251433  482008 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:49:30.251947  482008 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:49:30.252460  482008 addons.go:70] Setting default-storageclass=true in profile "embed-certs-499088"
	I1101 10:49:30.252483  482008 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-499088"
	I1101 10:49:30.252791  482008 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:49:30.254585  482008 out.go:179] * Verifying Kubernetes components...
	I1101 10:49:30.257546  482008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:30.290242  482008 addons.go:239] Setting addon default-storageclass=true in "embed-certs-499088"
	I1101 10:49:30.290287  482008 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:49:30.290740  482008 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:49:30.301741  482008 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:49:30.304607  482008 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:49:30.304643  482008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:49:30.304718  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:30.333480  482008 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:49:30.333502  482008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:49:30.333570  482008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:49:30.353075  482008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:49:30.381233  482008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:49:30.492479  482008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:49:30.564495  482008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:49:30.602956  482008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:49:30.769578  482008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:49:31.209767  482008 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 10:49:31.210959  482008 node_ready.go:35] waiting up to 6m0s for node "embed-certs-499088" to be "Ready" ...
	I1101 10:49:31.597334  482008 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:49:31.600257  482008 addons.go:515] duration metric: took 1.348922952s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:49:31.715777  482008 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-499088" context rescaled to 1 replicas
	W1101 10:49:33.215315  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	W1101 10:49:35.215873  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
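	
	For reference, the addon enablement above reduces to copying the two manifests onto the node and applying them with the bundled kubectl, exactly as the scp and apply lines show. A minimal sketch of doing the same by hand, assuming the profile name and paths from the log and that "minikube ssh" forwards the quoted command to the node:
	
	# Re-apply the same addon manifests that the log shows being scp'd and applied.
	minikube -p embed-certs-499088 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml -f /etc/kubernetes/addons/storageclass.yaml"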
	
	
	==> CRI-O <==
	Nov 01 10:49:25 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:25.525453398Z" level=info msg="Created container c098470803591ce297fa9787d99ed10694efdc2985d2a6c427e2be5e74373f97: kube-system/coredns-66bc5c9577-cs5l2/coredns" id=eb07d2e5-d888-463d-8411-64e3a8adb5d2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:49:25 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:25.53434766Z" level=info msg="Starting container: c098470803591ce297fa9787d99ed10694efdc2985d2a6c427e2be5e74373f97" id=81b203ef-b58d-4834-aeac-cbff59a70bf7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:49:25 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:25.540885485Z" level=info msg="Started container" PID=1733 containerID=c098470803591ce297fa9787d99ed10694efdc2985d2a6c427e2be5e74373f97 description=kube-system/coredns-66bc5c9577-cs5l2/coredns id=81b203ef-b58d-4834-aeac-cbff59a70bf7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=432096fae9e3ef1499078843c35aa26cf5372e1cdca345a56316d04b5b005959
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.900215381Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d3a5c7c4-2919-4e58-925a-bad824def6ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.900441918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.924646026Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:83c929970bc1c89777257a6de1296f1bc8f31757fc8b14dcca72d3548bbad846 UID:98a75dc0-f396-4705-a6f4-5d99adc472af NetNS:/var/run/netns/d4381723-b3f3-4395-aa7a-180d5729ade3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000ec2a0}] Aliases:map[]}"
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.924700714Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.93924252Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:83c929970bc1c89777257a6de1296f1bc8f31757fc8b14dcca72d3548bbad846 UID:98a75dc0-f396-4705-a6f4-5d99adc472af NetNS:/var/run/netns/d4381723-b3f3-4395-aa7a-180d5729ade3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000ec2a0}] Aliases:map[]}"
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.939403842Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.945526287Z" level=info msg="Ran pod sandbox 83c929970bc1c89777257a6de1296f1bc8f31757fc8b14dcca72d3548bbad846 with infra container: default/busybox/POD" id=d3a5c7c4-2919-4e58-925a-bad824def6ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.948205705Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=87651765-09d5-4f2b-a8f1-40e24e44115a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.948343906Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=87651765-09d5-4f2b-a8f1-40e24e44115a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.948383668Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=87651765-09d5-4f2b-a8f1-40e24e44115a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.951530955Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=033b0b57-38e3-4380-8031-cca08f3ee5bf name=/runtime.v1.ImageService/PullImage
	Nov 01 10:49:28 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:28.965700524Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.273341541Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=033b0b57-38e3-4380-8031-cca08f3ee5bf name=/runtime.v1.ImageService/PullImage
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.274294765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a6f1a41-cbde-44c4-9991-892ab12e9def name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.27836334Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c4ef0fc6-bf52-43ef-8388-e28c805d3117 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.289156213Z" level=info msg="Creating container: default/busybox/busybox" id=fa22aee0-854d-42d3-8a12-3255096a9b27 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.289416613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.296154399Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.296848871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.329775137Z" level=info msg="Created container e656dfce8b9c74712cc03750356909c45596c5dc96d94be224a20e95b51f01ad: default/busybox/busybox" id=fa22aee0-854d-42d3-8a12-3255096a9b27 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.331651881Z" level=info msg="Starting container: e656dfce8b9c74712cc03750356909c45596c5dc96d94be224a20e95b51f01ad" id=01d42635-06a0-4f4c-aa7e-94bcba3329df name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:49:31 default-k8s-diff-port-014050 crio[836]: time="2025-11-01T10:49:31.334599617Z" level=info msg="Started container" PID=1794 containerID=e656dfce8b9c74712cc03750356909c45596c5dc96d94be224a20e95b51f01ad description=default/busybox/busybox id=01d42635-06a0-4f4c-aa7e-94bcba3329df name=/runtime.v1.RuntimeService/StartContainer sandboxID=83c929970bc1c89777257a6de1296f1bc8f31757fc8b14dcca72d3548bbad846
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e656dfce8b9c7       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   83c929970bc1c       busybox                                                default
	c098470803591       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   432096fae9e3e       coredns-66bc5c9577-cs5l2                               kube-system
	df0c8f53f6e99       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   b132c5fde5782       storage-provisioner                                    kube-system
	9dbd8092ce2d2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   4ed66d89f56cb       kube-proxy-jhf2k                                       kube-system
	607da57f28845       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   1b8b37a3e0e36       kindnet-j2vhl                                          kube-system
	52c29f5423d37       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   61f27f8f8511e       etcd-default-k8s-diff-port-014050                      kube-system
	fe0e3759527c7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   c4c5d135b19cc       kube-scheduler-default-k8s-diff-port-014050            kube-system
	ebf662d62a2fc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   09e28bb9204ea       kube-controller-manager-default-k8s-diff-port-014050   kube-system
	04b92942b8a87       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e8e28e7b17870       kube-apiserver-default-k8s-diff-port-014050            kube-system
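	
	The table above has the shape of "crictl ps -a" output gathered inside the node. A minimal sketch for reproducing it on this profile, assuming "minikube ssh" forwards the quoted command and that crictl is available in the node image (as it is for the CRI-O runtime used here):
	
	# List every container on the node, matching the columns in the table above.
	minikube -p default-k8s-diff-port-014050 ssh "sudo crictl ps -a"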
	
	
	==> coredns [c098470803591ce297fa9787d99ed10694efdc2985d2a6c427e2be5e74373f97] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37041 - 5137 "HINFO IN 12530627322683163.8119674210490600352. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.00617061s
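	
	The HINFO query above is CoreDNS's startup self-check. The detail that matters for these tests is the host.minikube.internal record that minikube injects into the coredns ConfigMap (the sed pipeline and the "host record injected into CoreDNS's ConfigMap" line earlier in the log show the mechanism). A small sketch for inspecting the injected hosts stanza from the host machine; the kubectl context name is assumed to match the profile, and the gateway IP differs per cluster network:
	
	# Print the Corefile and show the injected hosts block.
	kubectl --context default-k8s-diff-port-014050 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'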
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-014050
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-014050
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=default-k8s-diff-port-014050
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_48_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:48:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-014050
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:49:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:49:24 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:49:24 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:49:24 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:49:24 +0000   Sat, 01 Nov 2025 10:49:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-014050
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                afada185-3889-484f-a7d8-6b092f3a288a
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-cs5l2                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-014050                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-j2vhl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-014050             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-014050    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-jhf2k                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-014050             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-014050 event: Registered Node default-k8s-diff-port-014050 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-014050 status is now: NodeReady
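	
	As a quick sanity check on the Allocated resources figures above, the percentages are consistent with integer-truncated ratios of the requests to this node's allocatable values (2 CPUs, 8022296Ki memory):
	
	echo $(( 850 * 100 / 2000 ))            # CPU: 850m of 2000m allocatable -> 42 (%)
	echo $(( 220 * 1024 * 100 / 8022296 ))  # memory: 220Mi of 8022296Ki allocatable -> 2 (%)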
	
	
	==> dmesg <==
	[ +37.261841] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [52c29f5423d37921fe6d497a1fc1c9b152dab76a189af82c5fe2654f731acb34] <==
	{"level":"warn","ts":"2025-11-01T10:48:33.527167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.542640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.562983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.577867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.594785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.615599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.628860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.651368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.665022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.699198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.717408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.747748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.777311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.781717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.805904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.816817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.835639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.858185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.868450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.885790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.910270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.948852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.964681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:33.982230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:48:34.081641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49060","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:49:39 up  2:32,  0 user,  load average: 4.32, 3.50, 2.80
	Linux default-k8s-diff-port-014050 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [607da57f288456963154bb67407edd0dd791239897e313b35f974057ef351a1e] <==
	I1101 10:48:44.447320       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:48:44.455759       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:48:44.455902       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:48:44.455914       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:48:44.455926       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:48:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:48:44.674462       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:48:44.674495       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:48:44.674509       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:48:44.674808       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:49:14.674674       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:49:14.674951       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:49:14.675078       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:49:14.675168       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 10:49:15.775480       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:49:15.775526       1 metrics.go:72] Registering metrics
	I1101 10:49:15.775593       1 controller.go:711] "Syncing nftables rules"
	I1101 10:49:24.681082       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:49:24.681125       1 main.go:301] handling current node
	I1101 10:49:34.673844       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:49:34.673889       1 main.go:301] handling current node
	
	
	==> kube-apiserver [04b92942b8a87f5bea4bb6f4e2eb2d61486f011874abc1a65494c939d1e148f1] <==
	I1101 10:48:34.957209       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:48:34.957642       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:48:34.957690       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:48:34.957720       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:48:34.984744       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:48:34.984880       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:48:35.000634       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:48:35.027001       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:48:35.702573       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:48:35.711220       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:48:35.711243       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:48:36.653428       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:48:36.728342       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:48:36.836481       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:48:36.873574       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:48:36.890928       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:48:36.894557       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:48:36.902185       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:48:38.061379       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:48:38.093617       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:48:38.115231       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:48:43.075104       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:48:43.084631       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:48:43.198659       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:48:43.232645       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ebf662d62a2fccc75381483f24461897c50ae40694f5449c8e6a8a0915b7686f] <==
	I1101 10:48:42.695353       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:48:42.695419       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:48:42.695434       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:48:42.780626       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:48:42.780708       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:48:42.695448       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:48:42.695498       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:48:42.695851       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:48:42.705722       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:48:42.801233       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:48:42.801329       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-014050"
	I1101 10:48:42.801374       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:48:42.729585       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:48:42.705765       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:48:42.705814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:48:42.729107       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:48:42.729129       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:48:42.729555       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:48:42.729628       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:48:42.802494       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:48:42.802501       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:48:42.820831       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:48:42.841046       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:48:42.841068       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:49:27.808679       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9dbd8092ce2d2b73a335cb0970065727bd542c7281477c4eb32945296ab6642b] <==
	I1101 10:48:45.855450       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:48:45.950598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:48:46.061641       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:48:46.061768       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:48:46.061889       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:48:46.104033       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:48:46.104155       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:48:46.119255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:48:46.119581       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:48:46.119603       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:48:46.132142       1 config.go:200] "Starting service config controller"
	I1101 10:48:46.132230       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:48:46.132275       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:48:46.132303       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:48:46.132337       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:48:46.132365       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:48:46.134885       1 config.go:309] "Starting node config controller"
	I1101 10:48:46.135967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:48:46.136045       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:48:46.233366       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:48:46.233454       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:48:46.233495       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fe0e3759527c7beef49513cf684ad8ad2f1620c393ababeb8a4d1a7651171724] <==
	E1101 10:48:34.996916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:48:34.997024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:48:34.997074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:48:34.997114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:48:34.997165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:48:34.997208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:48:34.997244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:48:34.997370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:48:34.997406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:48:34.997443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:48:34.997477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:48:35.001299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:48:35.001563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:48:35.868808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:48:35.961170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:48:36.049569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:48:36.054157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:48:36.076814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:48:36.124353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:48:36.138453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:48:36.140236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:48:36.219890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:48:36.226033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:48:36.250258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1101 10:48:38.577142       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:43.626522    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f4616783-98b7-4d54-b6b4-9f4b8bb30786-cni-cfg\") pod \"kindnet-j2vhl\" (UID: \"f4616783-98b7-4d54-b6b4-9f4b8bb30786\") " pod="kube-system/kindnet-j2vhl"
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:43.626544    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4616783-98b7-4d54-b6b4-9f4b8bb30786-lib-modules\") pod \"kindnet-j2vhl\" (UID: \"f4616783-98b7-4d54-b6b4-9f4b8bb30786\") " pod="kube-system/kindnet-j2vhl"
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:43.626561    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4616783-98b7-4d54-b6b4-9f4b8bb30786-xtables-lock\") pod \"kindnet-j2vhl\" (UID: \"f4616783-98b7-4d54-b6b4-9f4b8bb30786\") " pod="kube-system/kindnet-j2vhl"
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:43.832774    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snz5q\" (UniqueName: \"kubernetes.io/projected/c34f672d-ef6a-48f1-bd77-63fac4364e78-kube-api-access-snz5q\") pod \"kube-proxy-jhf2k\" (UID: \"c34f672d-ef6a-48f1-bd77-63fac4364e78\") " pod="kube-system/kube-proxy-jhf2k"
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:43.832829    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c34f672d-ef6a-48f1-bd77-63fac4364e78-kube-proxy\") pod \"kube-proxy-jhf2k\" (UID: \"c34f672d-ef6a-48f1-bd77-63fac4364e78\") " pod="kube-system/kube-proxy-jhf2k"
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:43.832852    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c34f672d-ef6a-48f1-bd77-63fac4364e78-xtables-lock\") pod \"kube-proxy-jhf2k\" (UID: \"c34f672d-ef6a-48f1-bd77-63fac4364e78\") " pod="kube-system/kube-proxy-jhf2k"
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:43.832878    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c34f672d-ef6a-48f1-bd77-63fac4364e78-lib-modules\") pod \"kube-proxy-jhf2k\" (UID: \"c34f672d-ef6a-48f1-bd77-63fac4364e78\") " pod="kube-system/kube-proxy-jhf2k"
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: E1101 10:48:43.846479    1311 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-014050\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-014050' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 01 10:48:43 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:43.936090    1311 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:48:44 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:44.499526    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-j2vhl" podStartSLOduration=1.499507855 podStartE2EDuration="1.499507855s" podCreationTimestamp="2025-11-01 10:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:48:44.499242499 +0000 UTC m=+6.548996526" watchObservedRunningTime="2025-11-01 10:48:44.499507855 +0000 UTC m=+6.549261882"
	Nov 01 10:48:44 default-k8s-diff-port-014050 kubelet[1311]: E1101 10:48:44.938414    1311 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:48:44 default-k8s-diff-port-014050 kubelet[1311]: E1101 10:48:44.938529    1311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c34f672d-ef6a-48f1-bd77-63fac4364e78-kube-proxy podName:c34f672d-ef6a-48f1-bd77-63fac4364e78 nodeName:}" failed. No retries permitted until 2025-11-01 10:48:45.438502365 +0000 UTC m=+7.488256391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c34f672d-ef6a-48f1-bd77-63fac4364e78-kube-proxy") pod "kube-proxy-jhf2k" (UID: "c34f672d-ef6a-48f1-bd77-63fac4364e78") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:48:45 default-k8s-diff-port-014050 kubelet[1311]: W1101 10:48:45.628246    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/crio-4ed66d89f56cbd2dce8409db00b8edee51504ffd60301161bd7c2340cef3894b WatchSource:0}: Error finding container 4ed66d89f56cbd2dce8409db00b8edee51504ffd60301161bd7c2340cef3894b: Status 404 returned error can't find the container with id 4ed66d89f56cbd2dce8409db00b8edee51504ffd60301161bd7c2340cef3894b
	Nov 01 10:48:46 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:48:46.464397    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jhf2k" podStartSLOduration=3.464380597 podStartE2EDuration="3.464380597s" podCreationTimestamp="2025-11-01 10:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:48:46.443062768 +0000 UTC m=+8.492816795" watchObservedRunningTime="2025-11-01 10:48:46.464380597 +0000 UTC m=+8.514134624"
	Nov 01 10:49:24 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:24.969173    1311 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:49:25 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:25.284174    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9nxr\" (UniqueName: \"kubernetes.io/projected/faa93d67-48d9-4840-9a3c-57ffb8b81d04-kube-api-access-h9nxr\") pod \"storage-provisioner\" (UID: \"faa93d67-48d9-4840-9a3c-57ffb8b81d04\") " pod="kube-system/storage-provisioner"
	Nov 01 10:49:25 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:25.284391    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/faa93d67-48d9-4840-9a3c-57ffb8b81d04-tmp\") pod \"storage-provisioner\" (UID: \"faa93d67-48d9-4840-9a3c-57ffb8b81d04\") " pod="kube-system/storage-provisioner"
	Nov 01 10:49:25 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:25.284474    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b7eb708-3da6-4cad-ac28-f540c6024c62-config-volume\") pod \"coredns-66bc5c9577-cs5l2\" (UID: \"7b7eb708-3da6-4cad-ac28-f540c6024c62\") " pod="kube-system/coredns-66bc5c9577-cs5l2"
	Nov 01 10:49:25 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:25.284552    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqv5c\" (UniqueName: \"kubernetes.io/projected/7b7eb708-3da6-4cad-ac28-f540c6024c62-kube-api-access-dqv5c\") pod \"coredns-66bc5c9577-cs5l2\" (UID: \"7b7eb708-3da6-4cad-ac28-f540c6024c62\") " pod="kube-system/coredns-66bc5c9577-cs5l2"
	Nov 01 10:49:25 default-k8s-diff-port-014050 kubelet[1311]: W1101 10:49:25.461221    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/crio-b132c5fde5782fae0f3406389ef3212c194ac2066d1c60efb57e8f9880851a00 WatchSource:0}: Error finding container b132c5fde5782fae0f3406389ef3212c194ac2066d1c60efb57e8f9880851a00: Status 404 returned error can't find the container with id b132c5fde5782fae0f3406389ef3212c194ac2066d1c60efb57e8f9880851a00
	Nov 01 10:49:26 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:26.577479    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.577451092 podStartE2EDuration="41.577451092s" podCreationTimestamp="2025-11-01 10:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:49:26.558395061 +0000 UTC m=+48.608149104" watchObservedRunningTime="2025-11-01 10:49:26.577451092 +0000 UTC m=+48.627211749"
	Nov 01 10:49:28 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:28.589168    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cs5l2" podStartSLOduration=45.589146455 podStartE2EDuration="45.589146455s" podCreationTimestamp="2025-11-01 10:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:49:26.581377389 +0000 UTC m=+48.631131424" watchObservedRunningTime="2025-11-01 10:49:28.589146455 +0000 UTC m=+50.638900481"
	Nov 01 10:49:28 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:28.610766    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59brw\" (UniqueName: \"kubernetes.io/projected/98a75dc0-f396-4705-a6f4-5d99adc472af-kube-api-access-59brw\") pod \"busybox\" (UID: \"98a75dc0-f396-4705-a6f4-5d99adc472af\") " pod="default/busybox"
	Nov 01 10:49:28 default-k8s-diff-port-014050 kubelet[1311]: W1101 10:49:28.944151    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/crio-83c929970bc1c89777257a6de1296f1bc8f31757fc8b14dcca72d3548bbad846 WatchSource:0}: Error finding container 83c929970bc1c89777257a6de1296f1bc8f31757fc8b14dcca72d3548bbad846: Status 404 returned error can't find the container with id 83c929970bc1c89777257a6de1296f1bc8f31757fc8b14dcca72d3548bbad846
	Nov 01 10:49:31 default-k8s-diff-port-014050 kubelet[1311]: I1101 10:49:31.574929    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.248430435 podStartE2EDuration="3.574913943s" podCreationTimestamp="2025-11-01 10:49:28 +0000 UTC" firstStartedPulling="2025-11-01 10:49:28.94870075 +0000 UTC m=+50.998454777" lastFinishedPulling="2025-11-01 10:49:31.275184258 +0000 UTC m=+53.324938285" observedRunningTime="2025-11-01 10:49:31.5745399 +0000 UTC m=+53.624293927" watchObservedRunningTime="2025-11-01 10:49:31.574913943 +0000 UTC m=+53.624667970"
	
	
	==> storage-provisioner [df0c8f53f6e99cfdab7d05563ef9538ab07988ce98cb4131efc4d42f0758992c] <==
	I1101 10:49:25.536847       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:49:25.563182       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:49:25.563249       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:49:25.599231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:25.623810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:49:25.624057       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:49:25.624603       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56c59731-4a1e-4a0c-aa25-4af28f08f0eb", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-014050_05d20651-13fe-4f44-9de6-658dfab468e4 became leader
	I1101 10:49:25.624778       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-014050_05d20651-13fe-4f44-9de6-658dfab468e4!
	W1101 10:49:25.666533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:25.674851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:49:25.725867       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-014050_05d20651-13fe-4f44-9de6-658dfab468e4!
	W1101 10:49:27.678462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:27.683592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:29.687043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:29.692677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:31.695881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:31.701474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:33.704253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:33.708780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:35.712257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:35.721425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:37.724802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:49:37.734141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-014050 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.50s)
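Note on the repeated client-go warnings in the storage-provisioner log above: the provisioner still takes its leader lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), so every renewal logs the "v1 Endpoints is deprecated in v1.33+" warning. Below is a minimal sketch of the same election done with a coordination.k8s.io Lease lock, which is what the warning recommends. The lock name, namespace, and identity are copied from the log; everything else (in-cluster config, timings) is illustrative and is not the provisioner's actual code.

	// Sketch only: lease-based leader election that would avoid the
	// "v1 Endpoints is deprecated" warnings seen in the provisioner log.
	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the sketch runs in-cluster
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same lock name/namespace as the log, but stored in a Lease object
		// instead of a v1 Endpoints object.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "default-k8s-diff-port-014050"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller once leadership is acquired
				},
				OnStoppedLeading: func() {
					// stop provisioning when leadership is lost
				},
			},
		})
	}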

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (296.08033ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:50:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-499088 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-499088 describe deploy/metrics-server -n kube-system: exit status 1 (81.149821ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-499088 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
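For context on the exit status 11 above: the enable aborts before any manifest is applied, because the addon path first checks whether the cluster is paused, and on this crio node that check runs `sudo runc list -f json`. runc keeps its container state under /run/runc, which does not exist here, so the listing exits 1 and the command surfaces MK_ADDON_ENABLE_PAUSED. The sketch below is a rough reproduction of that probe, not minikube's implementation; the profile name is simply the one from this test, and treating the missing state directory as "nothing paused" is an assumption of the sketch.

	// Sketch: re-run the same probe the failing enable performs and treat a
	// missing /run/runc as "no paused containers" instead of a hard error.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func pausedContainers(profile string) (string, error) {
		// Same command as in the stderr block above, run through minikube ssh.
		cmd := exec.Command("minikube", "-p", profile, "ssh", "--",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// crio has never asked runc to track anything, so the state
			// directory is absent; report an empty list rather than failing.
			if strings.Contains(string(out), "open /run/runc: no such file or directory") {
				return "[]", nil
			}
			return "", fmt.Errorf("runc list failed: %v: %s", err, out)
		}
		return string(out), nil
	}

	func main() {
		out, err := pausedContainers("embed-certs-499088")
		fmt.Println(out, err)
	}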
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-499088
helpers_test.go:243: (dbg) docker inspect embed-certs-499088:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3",
	        "Created": "2025-11-01T10:48:58.141820601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:48:58.217306775Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/hostname",
	        "HostsPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/hosts",
	        "LogPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3-json.log",
	        "Name": "/embed-certs-499088",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-499088:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-499088",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3",
	                "LowerDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-499088",
	                "Source": "/var/lib/docker/volumes/embed-certs-499088/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-499088",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-499088",
	                "name.minikube.sigs.k8s.io": "embed-certs-499088",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8a2d091173bf4be1ed78fde788f846ac2dcbb03a1f7853aee695ab4246398dc6",
	            "SandboxKey": "/var/run/docker/netns/8a2d091173bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-499088": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:4e:af:fb:b3:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b7910a68b927d6e29fdad9c6f3b7dabb12d2d1799598af6a052e70fa72598bc5",
	                    "EndpointID": "68cdb5666b2ceaaf248ca43a123552acdca269e5154f65142586ee6f452da70f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-499088",
	                        "495a58a1ddf7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
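The Ports block in the inspect output above is where the published host ports (33433-33437) come from; the "Last Start" log further down reads the SSH port back with a Go template over `docker container inspect`. A small sketch of the same lookup from Go follows; the container name is the one inspected above, and the helper is illustrative rather than minikube's cli_runner.

	// Sketch: read the host port published for 22/tcp with the same Go
	// template string that appears in the minikube logs below.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("embed-certs-499088")
		fmt.Println(port, err) // e.g. 33433, per the Ports block above
	}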
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-499088 -n embed-certs-499088
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-499088 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-499088 logs -n 25: (1.387357581s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p kubernetes-upgrade-946953                                                                                                                                                                                                                  │ kubernetes-upgrade-946953    │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ delete  │ -p force-systemd-env-555657                                                                                                                                                                                                                   │ force-systemd-env-555657     │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p cert-options-186677 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ cert-options-186677 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ ssh     │ -p cert-options-186677 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ delete  │ -p cert-options-186677                                                                                                                                                                                                                        │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │                     │
	│ stop    │ -p old-k8s-version-245622 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-245622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:47 UTC │
	│ image   │ old-k8s-version-245622 image list --format=json                                                                                                                                                                                               │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │ 01 Nov 25 10:47 UTC │
	│ pause   │ -p old-k8s-version-245622 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │                     │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p cert-expiration-308600                                                                                                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-014050 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-014050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:49:52
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:49:52.519951  485320 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:49:52.520168  485320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:49:52.520201  485320 out.go:374] Setting ErrFile to fd 2...
	I1101 10:49:52.520224  485320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:49:52.520522  485320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:49:52.520988  485320 out.go:368] Setting JSON to false
	I1101 10:49:52.521985  485320 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9144,"bootTime":1761985048,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:49:52.522086  485320 start.go:143] virtualization:  
	I1101 10:49:52.525052  485320 out.go:179] * [default-k8s-diff-port-014050] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:49:52.528870  485320 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:49:52.529038  485320 notify.go:221] Checking for updates...
	I1101 10:49:52.534948  485320 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:49:52.538301  485320 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:49:52.541327  485320 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:49:52.544308  485320 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:49:52.547268  485320 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:49:52.550796  485320 config.go:182] Loaded profile config "default-k8s-diff-port-014050": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:52.551363  485320 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:49:52.574337  485320 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:49:52.574466  485320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:49:52.640877  485320 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:49:52.630743203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:49:52.641046  485320 docker.go:319] overlay module found
	I1101 10:49:52.644412  485320 out.go:179] * Using the docker driver based on existing profile
	W1101 10:49:49.215571  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	W1101 10:49:51.715016  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	I1101 10:49:52.647622  485320 start.go:309] selected driver: docker
	I1101 10:49:52.647646  485320 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-014050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-014050 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:49:52.647764  485320 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:49:52.648472  485320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:49:52.714127  485320 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:49:52.70390283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:49:52.714470  485320 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:49:52.714498  485320 cni.go:84] Creating CNI manager for ""
	I1101 10:49:52.714550  485320 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:49:52.714691  485320 start.go:353] cluster config:
	{Name:default-k8s-diff-port-014050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-014050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:49:52.719759  485320 out.go:179] * Starting "default-k8s-diff-port-014050" primary control-plane node in "default-k8s-diff-port-014050" cluster
	I1101 10:49:52.722554  485320 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:49:52.725458  485320 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:49:52.728361  485320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:49:52.728425  485320 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:49:52.728441  485320 cache.go:59] Caching tarball of preloaded images
	I1101 10:49:52.728530  485320 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:49:52.728546  485320 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:49:52.728668  485320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/config.json ...
	I1101 10:49:52.728898  485320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:49:52.749619  485320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:49:52.749645  485320 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:49:52.749660  485320 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:49:52.749683  485320 start.go:360] acquireMachinesLock for default-k8s-diff-port-014050: {Name:mkdb92ced3400a07956b26dddab4e9c1e4c33cbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:49:52.749747  485320 start.go:364] duration metric: took 37.941µs to acquireMachinesLock for "default-k8s-diff-port-014050"
	I1101 10:49:52.749769  485320 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:49:52.749775  485320 fix.go:54] fixHost starting: 
	I1101 10:49:52.750025  485320 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-014050 --format={{.State.Status}}
	I1101 10:49:52.767366  485320 fix.go:112] recreateIfNeeded on default-k8s-diff-port-014050: state=Stopped err=<nil>
	W1101 10:49:52.767417  485320 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:49:52.770757  485320 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-014050" ...
	I1101 10:49:52.770854  485320 cli_runner.go:164] Run: docker start default-k8s-diff-port-014050
	I1101 10:49:53.030700  485320 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-014050 --format={{.State.Status}}
	I1101 10:49:53.060800  485320 kic.go:430] container "default-k8s-diff-port-014050" state is running.
	I1101 10:49:53.061200  485320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-014050
	I1101 10:49:53.085559  485320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/config.json ...
	I1101 10:49:53.085895  485320 machine.go:94] provisionDockerMachine start ...
	I1101 10:49:53.085960  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:53.109365  485320 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:53.109791  485320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1101 10:49:53.109804  485320 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:49:53.110594  485320 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:49:56.268684  485320 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-014050
	
	I1101 10:49:56.268708  485320 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-014050"
	I1101 10:49:56.268778  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:56.291523  485320 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:56.291967  485320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1101 10:49:56.291994  485320 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-014050 && echo "default-k8s-diff-port-014050" | sudo tee /etc/hostname
	I1101 10:49:56.466158  485320 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-014050
	
	I1101 10:49:56.466232  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:56.483812  485320 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:56.484155  485320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1101 10:49:56.484173  485320 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-014050' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-014050/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-014050' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:49:56.633521  485320 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:49:56.633615  485320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:49:56.633660  485320 ubuntu.go:190] setting up certificates
	I1101 10:49:56.633698  485320 provision.go:84] configureAuth start
	I1101 10:49:56.633775  485320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-014050
	I1101 10:49:56.651635  485320 provision.go:143] copyHostCerts
	I1101 10:49:56.651715  485320 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:49:56.651730  485320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:49:56.651817  485320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:49:56.651913  485320 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:49:56.651919  485320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:49:56.651944  485320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:49:56.651995  485320 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:49:56.651999  485320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:49:56.652021  485320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:49:56.652067  485320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-014050 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-014050 localhost minikube]
	I1101 10:49:56.754756  485320 provision.go:177] copyRemoteCerts
	I1101 10:49:56.754833  485320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:49:56.754877  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:56.772998  485320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	I1101 10:49:56.877005  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 10:49:56.896816  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:49:56.914617  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:49:56.933471  485320 provision.go:87] duration metric: took 299.736959ms to configureAuth
	I1101 10:49:56.933499  485320 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:49:56.933724  485320 config.go:182] Loaded profile config "default-k8s-diff-port-014050": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:56.933829  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:56.951443  485320 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:56.951761  485320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1101 10:49:56.951782  485320 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:49:57.273857  485320 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:49:57.273883  485320 machine.go:97] duration metric: took 4.187976045s to provisionDockerMachine
	I1101 10:49:57.273893  485320 start.go:293] postStartSetup for "default-k8s-diff-port-014050" (driver="docker")
	I1101 10:49:57.273905  485320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:49:57.273967  485320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:49:57.274016  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:57.295330  485320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	I1101 10:49:57.406191  485320 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:49:57.409847  485320 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:49:57.409878  485320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:49:57.409890  485320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:49:57.409971  485320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:49:57.410087  485320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:49:57.410213  485320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:49:57.418229  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:49:57.448017  485320 start.go:296] duration metric: took 174.103135ms for postStartSetup
	I1101 10:49:57.448160  485320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:49:57.448291  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:57.467939  485320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	W1101 10:49:53.715629  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	W1101 10:49:55.715903  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	I1101 10:49:57.571152  485320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:49:57.575846  485320 fix.go:56] duration metric: took 4.82606301s for fixHost
	I1101 10:49:57.575873  485320 start.go:83] releasing machines lock for "default-k8s-diff-port-014050", held for 4.826117467s
	I1101 10:49:57.575945  485320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-014050
	I1101 10:49:57.593248  485320 ssh_runner.go:195] Run: cat /version.json
	I1101 10:49:57.593301  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:57.593322  485320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:49:57.593392  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:49:57.614943  485320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	I1101 10:49:57.616570  485320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	I1101 10:49:57.803437  485320 ssh_runner.go:195] Run: systemctl --version
	I1101 10:49:57.809820  485320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:49:57.848282  485320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:49:57.852694  485320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:49:57.852783  485320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:49:57.861616  485320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:49:57.861643  485320 start.go:496] detecting cgroup driver to use...
	I1101 10:49:57.861677  485320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:49:57.861736  485320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:49:57.877894  485320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:49:57.891633  485320 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:49:57.891711  485320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:49:57.908634  485320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:49:57.922099  485320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:49:58.078829  485320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:49:58.202406  485320 docker.go:234] disabling docker service ...
	I1101 10:49:58.202489  485320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:49:58.217675  485320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:49:58.230810  485320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:49:58.344119  485320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:49:58.465473  485320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:49:58.482133  485320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:49:58.497180  485320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:49:58.497265  485320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:58.506866  485320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:49:58.506994  485320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:58.516492  485320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:58.526142  485320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:58.535505  485320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:49:58.544408  485320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:58.553987  485320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:58.563072  485320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:58.572307  485320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:49:58.580574  485320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:49:58.588478  485320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:58.704836  485320 ssh_runner.go:195] Run: sudo systemctl restart crio
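The sed edits at 10:49:58 above amount to pinning a handful of keys in the CRI-O drop-in before this restart. A minimal sketch of what /etc/crio/crio.conf.d/02-crio.conf carries after those edits (TOML section headers and all untouched default keys omitted; the log does not print the final file):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]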
	I1101 10:49:58.837538  485320 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:49:58.837659  485320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:49:58.841777  485320 start.go:564] Will wait 60s for crictl version
	I1101 10:49:58.841887  485320 ssh_runner.go:195] Run: which crictl
	I1101 10:49:58.845459  485320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:49:58.870545  485320 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:49:58.870677  485320 ssh_runner.go:195] Run: crio --version
	I1101 10:49:58.908215  485320 ssh_runner.go:195] Run: crio --version
	I1101 10:49:58.946532  485320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:49:58.949432  485320 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-014050 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:49:58.965838  485320 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:49:58.969858  485320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:49:58.979729  485320 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-014050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-014050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:49:58.979856  485320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:49:58.979915  485320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:49:59.015317  485320 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:49:59.015344  485320 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:49:59.015403  485320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:49:59.046968  485320 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:49:59.046993  485320 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:49:59.047002  485320 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 10:49:59.047106  485320 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-014050 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-014050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:49:59.047187  485320 ssh_runner.go:195] Run: crio config
	I1101 10:49:59.103368  485320 cni.go:84] Creating CNI manager for ""
	I1101 10:49:59.103391  485320 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:49:59.103412  485320 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:49:59.103437  485320 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-014050 NodeName:default-k8s-diff-port-014050 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:49:59.103563  485320 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-014050"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:49:59.103634  485320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:49:59.111262  485320 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:49:59.111340  485320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:49:59.118992  485320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 10:49:59.131369  485320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:49:59.144618  485320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 10:49:59.157655  485320 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:49:59.161219  485320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
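Both /etc/hosts rewrites above follow the same pattern: drop any stale line for the name, then append the current mapping. Assuming nothing else touches the file, the guest's /etc/hosts is expected to end up with entries along these lines:

	192.168.85.1	host.minikube.internal
	192.168.85.2	control-plane.minikube.internal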
	I1101 10:49:59.171066  485320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:59.297466  485320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:49:59.314533  485320 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050 for IP: 192.168.85.2
	I1101 10:49:59.314558  485320 certs.go:195] generating shared ca certs ...
	I1101 10:49:59.314577  485320 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:59.314777  485320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:49:59.314869  485320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:49:59.314884  485320 certs.go:257] generating profile certs ...
	I1101 10:49:59.315022  485320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.key
	I1101 10:49:59.315125  485320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/apiserver.key.bfeda6e3
	I1101 10:49:59.315211  485320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/proxy-client.key
	I1101 10:49:59.315365  485320 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:49:59.315425  485320 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:49:59.315446  485320 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:49:59.315495  485320 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:49:59.315553  485320 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:49:59.315582  485320 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:49:59.315663  485320 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:49:59.316392  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:49:59.340117  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:49:59.360734  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:49:59.379793  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:49:59.401049  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 10:49:59.430871  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:49:59.461433  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:49:59.486313  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:49:59.515383  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:49:59.538067  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:49:59.561685  485320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:49:59.581014  485320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:49:59.594716  485320 ssh_runner.go:195] Run: openssl version
	I1101 10:49:59.602018  485320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:49:59.611236  485320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:49:59.615078  485320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:49:59.615244  485320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:49:59.661006  485320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:49:59.669448  485320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:49:59.678045  485320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:49:59.681814  485320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:49:59.681881  485320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:49:59.725038  485320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:49:59.733729  485320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:49:59.742500  485320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:59.746511  485320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:59.746626  485320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:59.788775  485320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:49:59.797284  485320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:49:59.801314  485320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:49:59.844100  485320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:49:59.885269  485320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:49:59.927458  485320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:49:59.970022  485320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:50:00.021265  485320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
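Each openssl run above uses -checkend 86400, which asks whether the certificate expires within the next 86400 seconds (24 hours); an exit status of 0 means it remains valid at least that long. The same check can be reproduced by hand against any of the files listed, for example:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h"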
	I1101 10:50:00.113883  485320 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-014050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-014050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:50:00.114046  485320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:50:00.114166  485320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:50:00.255508  485320 cri.go:89] found id: ""
	I1101 10:50:00.255661  485320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:50:00.274472  485320 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:50:00.274554  485320 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:50:00.274671  485320 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:50:00.289290  485320 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:50:00.290372  485320 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-014050" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:50:00.291113  485320 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-014050" cluster setting kubeconfig missing "default-k8s-diff-port-014050" context setting]
	I1101 10:50:00.292396  485320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:00.294973  485320 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:50:00.312735  485320 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:50:00.312831  485320 kubeadm.go:602] duration metric: took 38.253347ms to restartPrimaryControlPlane
	I1101 10:50:00.312864  485320 kubeadm.go:403] duration metric: took 198.99113ms to StartCluster
	I1101 10:50:00.312939  485320 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:00.313066  485320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:50:00.315188  485320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:00.315633  485320 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:50:00.316067  485320 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:50:00.316159  485320 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-014050"
	I1101 10:50:00.316175  485320 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-014050"
	W1101 10:50:00.316182  485320 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:50:00.316214  485320 host.go:66] Checking if "default-k8s-diff-port-014050" exists ...
	I1101 10:50:00.316741  485320 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-014050 --format={{.State.Status}}
	I1101 10:50:00.317386  485320 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-014050"
	I1101 10:50:00.317430  485320 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-014050"
	W1101 10:50:00.317467  485320 addons.go:248] addon dashboard should already be in state true
	I1101 10:50:00.317512  485320 host.go:66] Checking if "default-k8s-diff-port-014050" exists ...
	I1101 10:50:00.318095  485320 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-014050 --format={{.State.Status}}
	I1101 10:50:00.318366  485320 config.go:182] Loaded profile config "default-k8s-diff-port-014050": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:50:00.318539  485320 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-014050"
	I1101 10:50:00.318589  485320 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-014050"
	I1101 10:50:00.318936  485320 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-014050 --format={{.State.Status}}
	I1101 10:50:00.337284  485320 out.go:179] * Verifying Kubernetes components...
	I1101 10:50:00.354134  485320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:50:00.426054  485320 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:50:00.429694  485320 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:50:00.429724  485320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:50:00.438554  485320 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-014050"
	W1101 10:50:00.438584  485320 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:50:00.438616  485320 host.go:66] Checking if "default-k8s-diff-port-014050" exists ...
	I1101 10:50:00.442115  485320 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-014050 --format={{.State.Status}}
	I1101 10:50:00.443128  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:50:00.469393  485320 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:50:00.486396  485320 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:50:00.489421  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:50:00.489450  485320 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:50:00.489528  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:50:00.509034  485320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	I1101 10:50:00.525207  485320 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:50:00.525236  485320 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:50:00.525350  485320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:50:00.556420  485320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	I1101 10:50:00.577245  485320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	I1101 10:50:00.838428  485320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:50:00.865427  485320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:50:00.890667  485320 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-014050" to be "Ready" ...
	I1101 10:50:00.977234  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:50:00.977261  485320 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:50:01.006728  485320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:50:01.049444  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:50:01.049468  485320 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:50:01.105589  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:50:01.105613  485320 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:50:01.169865  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:50:01.169890  485320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:50:01.220214  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:50:01.220289  485320 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:50:01.286198  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:50:01.286283  485320 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:50:01.309583  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:50:01.309659  485320 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:50:01.337606  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:50:01.337686  485320 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:50:01.362491  485320 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:50:01.362568  485320 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:50:01.386449  485320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 10:49:57.715976  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	W1101 10:50:00.219346  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	I1101 10:50:05.836317  485320 node_ready.go:49] node "default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:05.836353  485320 node_ready.go:38] duration metric: took 4.945597498s for node "default-k8s-diff-port-014050" to be "Ready" ...
	I1101 10:50:05.836366  485320 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:50:05.836442  485320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:50:07.489446  485320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.623944341s)
	I1101 10:50:07.489510  485320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.482756632s)
	I1101 10:50:07.489802  485320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.103267617s)
	I1101 10:50:07.489999  485320 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.653540497s)
	I1101 10:50:07.490023  485320 api_server.go:72] duration metric: took 7.174304863s to wait for apiserver process to appear ...
	I1101 10:50:07.490059  485320 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:50:07.490078  485320 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1101 10:50:07.493274  485320 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-014050 addons enable metrics-server
	
	I1101 10:50:07.498364  485320 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 10:50:07.499280  485320 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 10:50:07.500295  485320 api_server.go:141] control plane version: v1.34.1
	I1101 10:50:07.500322  485320 api_server.go:131] duration metric: took 10.255751ms to wait for apiserver health ...
	I1101 10:50:07.500331  485320 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:50:07.501186  485320 addons.go:515] duration metric: took 7.185115236s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:50:07.504983  485320 system_pods.go:59] 8 kube-system pods found
	I1101 10:50:07.505022  485320 system_pods.go:61] "coredns-66bc5c9577-cs5l2" [7b7eb708-3da6-4cad-ac28-f540c6024c62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:07.505032  485320 system_pods.go:61] "etcd-default-k8s-diff-port-014050" [ff74ab50-5145-4008-b755-f225069f6886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:50:07.505040  485320 system_pods.go:61] "kindnet-j2vhl" [f4616783-98b7-4d54-b6b4-9f4b8bb30786] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:50:07.505048  485320 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-014050" [239a8d41-fbe2-4033-af92-c65be32b02a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:50:07.505058  485320 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-014050" [7f8d6010-1290-47f3-90fc-1691db840658] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:50:07.505065  485320 system_pods.go:61] "kube-proxy-jhf2k" [c34f672d-ef6a-48f1-bd77-63fac4364e78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:50:07.505072  485320 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-014050" [8fc96498-e77c-4641-93ff-499959f9b8b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:50:07.505082  485320 system_pods.go:61] "storage-provisioner" [faa93d67-48d9-4840-9a3c-57ffb8b81d04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:07.505088  485320 system_pods.go:74] duration metric: took 4.750822ms to wait for pod list to return data ...
	I1101 10:50:07.505095  485320 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:50:07.508170  485320 default_sa.go:45] found service account: "default"
	I1101 10:50:07.508194  485320 default_sa.go:55] duration metric: took 3.091992ms for default service account to be created ...
	I1101 10:50:07.508205  485320 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:50:07.511407  485320 system_pods.go:86] 8 kube-system pods found
	I1101 10:50:07.511494  485320 system_pods.go:89] "coredns-66bc5c9577-cs5l2" [7b7eb708-3da6-4cad-ac28-f540c6024c62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:07.511523  485320 system_pods.go:89] "etcd-default-k8s-diff-port-014050" [ff74ab50-5145-4008-b755-f225069f6886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:50:07.511565  485320 system_pods.go:89] "kindnet-j2vhl" [f4616783-98b7-4d54-b6b4-9f4b8bb30786] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:50:07.511596  485320 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-014050" [239a8d41-fbe2-4033-af92-c65be32b02a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:50:07.511620  485320 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-014050" [7f8d6010-1290-47f3-90fc-1691db840658] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:50:07.511650  485320 system_pods.go:89] "kube-proxy-jhf2k" [c34f672d-ef6a-48f1-bd77-63fac4364e78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:50:07.511683  485320 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-014050" [8fc96498-e77c-4641-93ff-499959f9b8b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:50:07.511719  485320 system_pods.go:89] "storage-provisioner" [faa93d67-48d9-4840-9a3c-57ffb8b81d04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:07.511756  485320 system_pods.go:126] duration metric: took 3.529895ms to wait for k8s-apps to be running ...
	I1101 10:50:07.511783  485320 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:50:07.511867  485320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W1101 10:50:02.715874  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	W1101 10:50:04.715932  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	W1101 10:50:06.718677  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	I1101 10:50:07.529962  485320 system_svc.go:56] duration metric: took 18.170516ms WaitForService to wait for kubelet
	I1101 10:50:07.530032  485320 kubeadm.go:587] duration metric: took 7.214311539s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:50:07.530068  485320 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:50:07.534937  485320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:50:07.534965  485320 node_conditions.go:123] node cpu capacity is 2
	I1101 10:50:07.534979  485320 node_conditions.go:105] duration metric: took 4.888784ms to run NodePressure ...
	I1101 10:50:07.534992  485320 start.go:242] waiting for startup goroutines ...
	I1101 10:50:07.534999  485320 start.go:247] waiting for cluster config update ...
	I1101 10:50:07.535010  485320 start.go:256] writing updated cluster config ...
	I1101 10:50:07.535294  485320 ssh_runner.go:195] Run: rm -f paused
	I1101 10:50:07.539842  485320 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:50:07.544828  485320 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cs5l2" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:50:09.566245  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	W1101 10:50:12.057633  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	W1101 10:50:09.215746  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	W1101 10:50:11.215983  482008 node_ready.go:57] node "embed-certs-499088" has "Ready":"False" status (will retry)
	I1101 10:50:11.715683  482008 node_ready.go:49] node "embed-certs-499088" is "Ready"
	I1101 10:50:11.715720  482008 node_ready.go:38] duration metric: took 40.503417576s for node "embed-certs-499088" to be "Ready" ...
	I1101 10:50:11.715735  482008 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:50:11.715795  482008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:50:11.731161  482008 api_server.go:72] duration metric: took 41.480336607s to wait for apiserver process to appear ...
	I1101 10:50:11.731185  482008 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:50:11.731205  482008 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:50:11.740219  482008 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:50:11.741584  482008 api_server.go:141] control plane version: v1.34.1
	I1101 10:50:11.741651  482008 api_server.go:131] duration metric: took 10.457772ms to wait for apiserver health ...
	I1101 10:50:11.741674  482008 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:50:11.745808  482008 system_pods.go:59] 8 kube-system pods found
	I1101 10:50:11.745893  482008 system_pods.go:61] "coredns-66bc5c9577-pdh6r" [5b76d194-6689-4f01-aa5d-c2d0b63808ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:11.745917  482008 system_pods.go:61] "etcd-embed-certs-499088" [2096f7af-f76e-4736-b77a-60c61146d542] Running
	I1101 10:50:11.745955  482008 system_pods.go:61] "kindnet-9sr9j" [a24caca1-3f4b-4d34-b663-c58a152bfa02] Running
	I1101 10:50:11.745978  482008 system_pods.go:61] "kube-apiserver-embed-certs-499088" [599d58e4-1782-4266-bc1e-0eda23f68ed9] Running
	I1101 10:50:11.745999  482008 system_pods.go:61] "kube-controller-manager-embed-certs-499088" [1cacce4c-3b57-4821-9a97-123186b7a8fd] Running
	I1101 10:50:11.746020  482008 system_pods.go:61] "kube-proxy-dqf86" [92677bfa-cc3f-4940-89f9-23d383e5dba9] Running
	I1101 10:50:11.746054  482008 system_pods.go:61] "kube-scheduler-embed-certs-499088" [3c20f3ae-0d1a-440e-86ae-4f691c6988cd] Running
	I1101 10:50:11.746075  482008 system_pods.go:61] "storage-provisioner" [5678aab9-c0e9-46c3-929c-04fd8bcc56db] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:11.746099  482008 system_pods.go:74] duration metric: took 4.407811ms to wait for pod list to return data ...
	I1101 10:50:11.746140  482008 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:50:11.749018  482008 default_sa.go:45] found service account: "default"
	I1101 10:50:11.749090  482008 default_sa.go:55] duration metric: took 2.931039ms for default service account to be created ...
	I1101 10:50:11.749114  482008 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:50:11.752769  482008 system_pods.go:86] 8 kube-system pods found
	I1101 10:50:11.752853  482008 system_pods.go:89] "coredns-66bc5c9577-pdh6r" [5b76d194-6689-4f01-aa5d-c2d0b63808ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:11.752876  482008 system_pods.go:89] "etcd-embed-certs-499088" [2096f7af-f76e-4736-b77a-60c61146d542] Running
	I1101 10:50:11.752901  482008 system_pods.go:89] "kindnet-9sr9j" [a24caca1-3f4b-4d34-b663-c58a152bfa02] Running
	I1101 10:50:11.752974  482008 system_pods.go:89] "kube-apiserver-embed-certs-499088" [599d58e4-1782-4266-bc1e-0eda23f68ed9] Running
	I1101 10:50:11.753008  482008 system_pods.go:89] "kube-controller-manager-embed-certs-499088" [1cacce4c-3b57-4821-9a97-123186b7a8fd] Running
	I1101 10:50:11.753034  482008 system_pods.go:89] "kube-proxy-dqf86" [92677bfa-cc3f-4940-89f9-23d383e5dba9] Running
	I1101 10:50:11.753054  482008 system_pods.go:89] "kube-scheduler-embed-certs-499088" [3c20f3ae-0d1a-440e-86ae-4f691c6988cd] Running
	I1101 10:50:11.753094  482008 system_pods.go:89] "storage-provisioner" [5678aab9-c0e9-46c3-929c-04fd8bcc56db] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:11.753137  482008 retry.go:31] will retry after 307.880851ms: missing components: kube-dns
	I1101 10:50:12.067393  482008 system_pods.go:86] 8 kube-system pods found
	I1101 10:50:12.067476  482008 system_pods.go:89] "coredns-66bc5c9577-pdh6r" [5b76d194-6689-4f01-aa5d-c2d0b63808ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:12.067499  482008 system_pods.go:89] "etcd-embed-certs-499088" [2096f7af-f76e-4736-b77a-60c61146d542] Running
	I1101 10:50:12.067523  482008 system_pods.go:89] "kindnet-9sr9j" [a24caca1-3f4b-4d34-b663-c58a152bfa02] Running
	I1101 10:50:12.067561  482008 system_pods.go:89] "kube-apiserver-embed-certs-499088" [599d58e4-1782-4266-bc1e-0eda23f68ed9] Running
	I1101 10:50:12.067581  482008 system_pods.go:89] "kube-controller-manager-embed-certs-499088" [1cacce4c-3b57-4821-9a97-123186b7a8fd] Running
	I1101 10:50:12.067603  482008 system_pods.go:89] "kube-proxy-dqf86" [92677bfa-cc3f-4940-89f9-23d383e5dba9] Running
	I1101 10:50:12.067641  482008 system_pods.go:89] "kube-scheduler-embed-certs-499088" [3c20f3ae-0d1a-440e-86ae-4f691c6988cd] Running
	I1101 10:50:12.067666  482008 system_pods.go:89] "storage-provisioner" [5678aab9-c0e9-46c3-929c-04fd8bcc56db] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:12.067720  482008 retry.go:31] will retry after 344.125944ms: missing components: kube-dns
	I1101 10:50:12.415803  482008 system_pods.go:86] 8 kube-system pods found
	I1101 10:50:12.415887  482008 system_pods.go:89] "coredns-66bc5c9577-pdh6r" [5b76d194-6689-4f01-aa5d-c2d0b63808ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:12.415911  482008 system_pods.go:89] "etcd-embed-certs-499088" [2096f7af-f76e-4736-b77a-60c61146d542] Running
	I1101 10:50:12.415934  482008 system_pods.go:89] "kindnet-9sr9j" [a24caca1-3f4b-4d34-b663-c58a152bfa02] Running
	I1101 10:50:12.415974  482008 system_pods.go:89] "kube-apiserver-embed-certs-499088" [599d58e4-1782-4266-bc1e-0eda23f68ed9] Running
	I1101 10:50:12.415993  482008 system_pods.go:89] "kube-controller-manager-embed-certs-499088" [1cacce4c-3b57-4821-9a97-123186b7a8fd] Running
	I1101 10:50:12.416012  482008 system_pods.go:89] "kube-proxy-dqf86" [92677bfa-cc3f-4940-89f9-23d383e5dba9] Running
	I1101 10:50:12.416046  482008 system_pods.go:89] "kube-scheduler-embed-certs-499088" [3c20f3ae-0d1a-440e-86ae-4f691c6988cd] Running
	I1101 10:50:12.416071  482008 system_pods.go:89] "storage-provisioner" [5678aab9-c0e9-46c3-929c-04fd8bcc56db] Running
	I1101 10:50:12.416096  482008 system_pods.go:126] duration metric: took 666.962332ms to wait for k8s-apps to be running ...
	I1101 10:50:12.416130  482008 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:50:12.416222  482008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:50:12.444185  482008 system_svc.go:56] duration metric: took 28.045267ms WaitForService to wait for kubelet
	I1101 10:50:12.444266  482008 kubeadm.go:587] duration metric: took 42.193445659s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:50:12.444304  482008 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:50:12.455409  482008 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:50:12.455491  482008 node_conditions.go:123] node cpu capacity is 2
	I1101 10:50:12.455536  482008 node_conditions.go:105] duration metric: took 11.192072ms to run NodePressure ...
	I1101 10:50:12.455579  482008 start.go:242] waiting for startup goroutines ...
	I1101 10:50:12.455606  482008 start.go:247] waiting for cluster config update ...
	I1101 10:50:12.455635  482008 start.go:256] writing updated cluster config ...
	I1101 10:50:12.456009  482008 ssh_runner.go:195] Run: rm -f paused
	I1101 10:50:12.460507  482008 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:50:12.476488  482008 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pdh6r" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:12.490115  482008 pod_ready.go:94] pod "coredns-66bc5c9577-pdh6r" is "Ready"
	I1101 10:50:12.490203  482008 pod_ready.go:86] duration metric: took 13.6403ms for pod "coredns-66bc5c9577-pdh6r" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:12.569381  482008 pod_ready.go:83] waiting for pod "etcd-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:12.575745  482008 pod_ready.go:94] pod "etcd-embed-certs-499088" is "Ready"
	I1101 10:50:12.575823  482008 pod_ready.go:86] duration metric: took 6.366443ms for pod "etcd-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:12.578910  482008 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:12.585396  482008 pod_ready.go:94] pod "kube-apiserver-embed-certs-499088" is "Ready"
	I1101 10:50:12.585471  482008 pod_ready.go:86] duration metric: took 6.489415ms for pod "kube-apiserver-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:12.590231  482008 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:12.865965  482008 pod_ready.go:94] pod "kube-controller-manager-embed-certs-499088" is "Ready"
	I1101 10:50:12.866046  482008 pod_ready.go:86] duration metric: took 275.741082ms for pod "kube-controller-manager-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:13.065613  482008 pod_ready.go:83] waiting for pod "kube-proxy-dqf86" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:13.466359  482008 pod_ready.go:94] pod "kube-proxy-dqf86" is "Ready"
	I1101 10:50:13.466433  482008 pod_ready.go:86] duration metric: took 400.742209ms for pod "kube-proxy-dqf86" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:13.666416  482008 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:14.065718  482008 pod_ready.go:94] pod "kube-scheduler-embed-certs-499088" is "Ready"
	I1101 10:50:14.065749  482008 pod_ready.go:86] duration metric: took 399.260669ms for pod "kube-scheduler-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:14.065764  482008 pod_ready.go:40] duration metric: took 1.605181533s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:50:14.155944  482008 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:50:14.160232  482008 out.go:179] * Done! kubectl is now configured to use "embed-certs-499088" cluster and "default" namespace by default
	W1101 10:50:14.551042  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	W1101 10:50:17.057492  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	W1101 10:50:19.558984  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	W1101 10:50:22.057511  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
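The "pod_ready" waits for the embed-certs-499088 profile above are performed by the test harness itself; a rough manual equivalent, assuming the kubeconfig context "embed-certs-499088" written by the run above is still current, would be something like:

	kubectl --context embed-certs-499088 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context embed-certs-499088 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m

The 4m timeout mirrors the harness's "extra waiting up to 4m0s" window, and the same pattern applies to the component=etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler labels it checks.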
	
	
	==> CRI-O <==
	Nov 01 10:50:11 embed-certs-499088 crio[838]: time="2025-11-01T10:50:11.929177157Z" level=info msg="Created container 28c5dd56100f4ee928127afb57b2ee500c117158d65a03bef1dcb491334142e2: kube-system/coredns-66bc5c9577-pdh6r/coredns" id=215362a0-36b6-42b0-88fd-25f4ef3fb8cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:50:11 embed-certs-499088 crio[838]: time="2025-11-01T10:50:11.931747675Z" level=info msg="Starting container: 28c5dd56100f4ee928127afb57b2ee500c117158d65a03bef1dcb491334142e2" id=6ce34842-c837-46e1-8735-7c72cc3bab80 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:50:11 embed-certs-499088 crio[838]: time="2025-11-01T10:50:11.934898548Z" level=info msg="Started container" PID=1734 containerID=28c5dd56100f4ee928127afb57b2ee500c117158d65a03bef1dcb491334142e2 description=kube-system/coredns-66bc5c9577-pdh6r/coredns id=6ce34842-c837-46e1-8735-7c72cc3bab80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b02dd61a04287f9b91608c1f6761b86fbfeaac0c5034d55a10d8ef0ffae9cb92
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.715614528Z" level=info msg="Running pod sandbox: default/busybox/POD" id=33459edc-85fa-4655-8ee1-365c49180adb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.715701864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.723630209Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2adc0ba804cc56ee2ed3dfb2d5e8a1b2a5af793ffb6b5547205211f8102efd6f UID:d07dd95a-7eea-459b-8c02-1476a2c71627 NetNS:/var/run/netns/7a7dfb2b-72ee-4648-a0aa-336032858ae7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db30}] Aliases:map[]}"
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.723819503Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.740041267Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2adc0ba804cc56ee2ed3dfb2d5e8a1b2a5af793ffb6b5547205211f8102efd6f UID:d07dd95a-7eea-459b-8c02-1476a2c71627 NetNS:/var/run/netns/7a7dfb2b-72ee-4648-a0aa-336032858ae7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db30}] Aliases:map[]}"
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.740333626Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.754707455Z" level=info msg="Ran pod sandbox 2adc0ba804cc56ee2ed3dfb2d5e8a1b2a5af793ffb6b5547205211f8102efd6f with infra container: default/busybox/POD" id=33459edc-85fa-4655-8ee1-365c49180adb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.756722087Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6d964ab1-bd6f-4717-9a8b-fdf566deffd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.756852936Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6d964ab1-bd6f-4717-9a8b-fdf566deffd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.756891541Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6d964ab1-bd6f-4717-9a8b-fdf566deffd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.764394348Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7a023781-01fb-4d96-ab01-2d380b8337bc name=/runtime.v1.ImageService/PullImage
	Nov 01 10:50:14 embed-certs-499088 crio[838]: time="2025-11-01T10:50:14.767333075Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.906084099Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7a023781-01fb-4d96-ab01-2d380b8337bc name=/runtime.v1.ImageService/PullImage
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.907326228Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=78e0c4ea-361b-4707-b5e8-80ccc2524b2b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.911650026Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c731f6a2-2716-44e1-9247-df8b5aac56df name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.919522132Z" level=info msg="Creating container: default/busybox/busybox" id=036eedce-2e39-4c74-a6e6-940528823812 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.919807025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.928751872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.929502944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.954165107Z" level=info msg="Created container d6df1708b651a82a7e8bb56ca77a9060231cc7fe03c396caf38d3c5b903ffc2a: default/busybox/busybox" id=036eedce-2e39-4c74-a6e6-940528823812 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.956878396Z" level=info msg="Starting container: d6df1708b651a82a7e8bb56ca77a9060231cc7fe03c396caf38d3c5b903ffc2a" id=95bc1b3e-6c88-411d-8125-bc3f085b85c8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:50:16 embed-certs-499088 crio[838]: time="2025-11-01T10:50:16.962116385Z" level=info msg="Started container" PID=1793 containerID=d6df1708b651a82a7e8bb56ca77a9060231cc7fe03c396caf38d3c5b903ffc2a description=default/busybox/busybox id=95bc1b3e-6c88-411d-8125-bc3f085b85c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2adc0ba804cc56ee2ed3dfb2d5e8a1b2a5af793ffb6b5547205211f8102efd6f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d6df1708b651a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   2adc0ba804cc5       busybox                                      default
	28c5dd56100f4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   b02dd61a04287       coredns-66bc5c9577-pdh6r                     kube-system
	f5d02af29c876       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   01c094dc3ebe4       storage-provisioner                          kube-system
	5f6503b152a0d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   9362bdda3cdfd       kindnet-9sr9j                                kube-system
	fed6f3dd6791e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   c75ac9834cd33       kube-proxy-dqf86                             kube-system
	b748ea06a4cac       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   1d315c94220ad       etcd-embed-certs-499088                      kube-system
	ff37335964974       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   b4d0e408cba46       kube-controller-manager-embed-certs-499088   kube-system
	a56ab8eaadb57       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   95c7ce1e06a03       kube-scheduler-embed-certs-499088            kube-system
	34be399fb9b92       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   b6b54ec933137       kube-apiserver-embed-certs-499088            kube-system
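Roughly the same listing can be reproduced by hand with crictl over minikube ssh (a sketch, assuming the profile is still running):

	out/minikube-linux-arm64 -p embed-certs-499088 ssh -- sudo crictl ps -a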
	
	
	==> coredns [28c5dd56100f4ee928127afb57b2ee500c117158d65a03bef1dcb491334142e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50235 - 59958 "HINFO IN 4221064166202535977.2845791351710056527. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011600601s
	
	
	==> describe nodes <==
	Name:               embed-certs-499088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-499088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=embed-certs-499088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_49_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:49:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-499088
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:50:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:50:11 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:50:11 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:50:11 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:50:11 +0000   Sat, 01 Nov 2025 10:50:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-499088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                07472705-003c-41a7-ae50-6d94d68f067a
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-pdh6r                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-499088                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-9sr9j                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-499088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-499088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-dqf86                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-499088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 53s   kube-proxy       
	  Normal   Starting                 61s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s   kubelet          Node embed-certs-499088 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s   kubelet          Node embed-certs-499088 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s   kubelet          Node embed-certs-499088 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node embed-certs-499088 event: Registered Node embed-certs-499088 in Controller
	  Normal   NodeReady                14s   kubelet          Node embed-certs-499088 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b748ea06a4cac945ff1961974453a97468f381d5c4f26aafd57387323a9f5da3] <==
	{"level":"warn","ts":"2025-11-01T10:49:20.699585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.716256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.745847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.749177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.769005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.781043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.796796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.811211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.826245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.841158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.857662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.875665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.901015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.908016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.922937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.938252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.976814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.981613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:20.994346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.015422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.029978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.069837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.084577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.105607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.169004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37192","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:50:25 up  2:32,  0 user,  load average: 4.36, 3.61, 2.87
	Linux embed-certs-499088 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f6503b152a0d0f17e9e6586b54fbfd8d2a07ad6478ebf3696664138b66e1562] <==
	I1101 10:49:30.941399       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:49:30.941805       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:49:30.942109       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:49:30.942158       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:49:30.942196       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:49:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:49:31.127877       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:49:31.127903       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:49:31.127911       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:49:31.128235       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:50:01.128116       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:50:01.128295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:50:01.128319       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:50:01.129545       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 10:50:02.728681       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:50:02.728798       1 metrics.go:72] Registering metrics
	I1101 10:50:02.729048       1 controller.go:711] "Syncing nftables rules"
	I1101 10:50:11.134545       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:50:11.134665       1 main.go:301] handling current node
	I1101 10:50:21.130300       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:50:21.130410       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34be399fb9b926287d79f647bee753689f7f79aff0063e3f587a21316a78a33f] <==
	I1101 10:49:21.991732       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:49:21.991948       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:49:22.019455       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:49:22.021360       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1101 10:49:22.022164       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 10:49:22.023323       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:49:22.024222       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:49:22.225302       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:49:22.734496       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:49:22.741927       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:49:22.741992       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:49:23.554549       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:49:23.617391       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:49:23.764775       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:49:23.775458       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:49:23.776725       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:49:23.782770       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:49:23.885623       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:49:25.056033       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:49:25.145468       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:49:25.169511       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:49:29.337441       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:49:29.637864       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:49:29.989356       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:49:29.994156       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ff373359649745a20dd303270705d134d0fe11ec4a65e2ad389bce2fc29a6629] <==
	I1101 10:49:28.907357       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:49:28.916241       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:49:28.921782       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:49:28.928567       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:49:28.930885       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:49:28.930966       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:49:28.930998       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:49:28.931073       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:49:28.931767       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:49:28.931930       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:49:28.932655       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:49:28.932071       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:49:28.933513       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:49:28.933620       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:49:28.933631       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:49:28.933641       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:49:28.933975       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:49:28.938151       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:49:28.941745       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:49:28.941875       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:49:28.941938       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:49:28.941970       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:49:28.941998       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:49:28.969322       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-499088" podCIDRs=["10.244.0.0/24"]
	I1101 10:50:13.933594       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fed6f3dd6791e709b9bc272c9ccb493c90c37067338d5c63595ef02ef173e50e] <==
	I1101 10:49:30.951426       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:49:31.066730       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:49:31.167448       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:49:31.167515       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:49:31.167636       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:49:31.456073       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:49:31.456201       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:49:31.460885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:49:31.461633       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:49:31.462032       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:49:31.463682       1 config.go:200] "Starting service config controller"
	I1101 10:49:31.464728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:49:31.464061       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:49:31.464833       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:49:31.464073       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:49:31.464902       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:49:31.464457       1 config.go:309] "Starting node config controller"
	I1101 10:49:31.464993       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:49:31.465027       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:49:31.565552       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:49:31.565684       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:49:31.565698       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a56ab8eaadb57c5e59164d5113864b9f01a56948e5ed9c4a2ee329dbc1864a9f] <==
	E1101 10:49:22.057253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:49:22.057293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:49:22.057328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:49:22.057367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:49:22.057978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:49:22.058077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:49:22.058120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:49:22.058151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:49:22.058191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:49:22.058228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:49:22.058262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:49:22.058297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:49:22.058333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:49:22.058361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:49:22.886891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:49:22.928173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:49:22.993203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:49:23.134452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:49:23.204381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:49:23.234821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:49:23.238337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:49:23.246063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:49:23.263537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:49:23.430720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 10:49:25.738368       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: I1101 10:49:29.813495    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a24caca1-3f4b-4d34-b663-c58a152bfa02-cni-cfg\") pod \"kindnet-9sr9j\" (UID: \"a24caca1-3f4b-4d34-b663-c58a152bfa02\") " pod="kube-system/kindnet-9sr9j"
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: I1101 10:49:29.813514    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92677bfa-cc3f-4940-89f9-23d383e5dba9-xtables-lock\") pod \"kube-proxy-dqf86\" (UID: \"92677bfa-cc3f-4940-89f9-23d383e5dba9\") " pod="kube-system/kube-proxy-dqf86"
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: I1101 10:49:29.813532    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92677bfa-cc3f-4940-89f9-23d383e5dba9-lib-modules\") pod \"kube-proxy-dqf86\" (UID: \"92677bfa-cc3f-4940-89f9-23d383e5dba9\") " pod="kube-system/kube-proxy-dqf86"
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: I1101 10:49:29.813562    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6nqj\" (UniqueName: \"kubernetes.io/projected/a24caca1-3f4b-4d34-b663-c58a152bfa02-kube-api-access-n6nqj\") pod \"kindnet-9sr9j\" (UID: \"a24caca1-3f4b-4d34-b663-c58a152bfa02\") " pod="kube-system/kindnet-9sr9j"
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: E1101 10:49:29.943226    1304 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: E1101 10:49:29.943422    1304 projected.go:196] Error preparing data for projected volume kube-api-access-98p57 for pod kube-system/kube-proxy-dqf86: configmap "kube-root-ca.crt" not found
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: E1101 10:49:29.943563    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92677bfa-cc3f-4940-89f9-23d383e5dba9-kube-api-access-98p57 podName:92677bfa-cc3f-4940-89f9-23d383e5dba9 nodeName:}" failed. No retries permitted until 2025-11-01 10:49:30.443534296 +0000 UTC m=+5.610450935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-98p57" (UniqueName: "kubernetes.io/projected/92677bfa-cc3f-4940-89f9-23d383e5dba9-kube-api-access-98p57") pod "kube-proxy-dqf86" (UID: "92677bfa-cc3f-4940-89f9-23d383e5dba9") : configmap "kube-root-ca.crt" not found
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: E1101 10:49:29.945930    1304 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: E1101 10:49:29.946077    1304 projected.go:196] Error preparing data for projected volume kube-api-access-n6nqj for pod kube-system/kindnet-9sr9j: configmap "kube-root-ca.crt" not found
	Nov 01 10:49:29 embed-certs-499088 kubelet[1304]: E1101 10:49:29.946201    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a24caca1-3f4b-4d34-b663-c58a152bfa02-kube-api-access-n6nqj podName:a24caca1-3f4b-4d34-b663-c58a152bfa02 nodeName:}" failed. No retries permitted until 2025-11-01 10:49:30.44618109 +0000 UTC m=+5.613097729 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n6nqj" (UniqueName: "kubernetes.io/projected/a24caca1-3f4b-4d34-b663-c58a152bfa02-kube-api-access-n6nqj") pod "kindnet-9sr9j" (UID: "a24caca1-3f4b-4d34-b663-c58a152bfa02") : configmap "kube-root-ca.crt" not found
	Nov 01 10:49:30 embed-certs-499088 kubelet[1304]: I1101 10:49:30.529178    1304 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:49:30 embed-certs-499088 kubelet[1304]: W1101 10:49:30.662145    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-9362bdda3cdfd98059eb01b98d8df6e84d80facde6ac95e983702394e80487ed WatchSource:0}: Error finding container 9362bdda3cdfd98059eb01b98d8df6e84d80facde6ac95e983702394e80487ed: Status 404 returned error can't find the container with id 9362bdda3cdfd98059eb01b98d8df6e84d80facde6ac95e983702394e80487ed
	Nov 01 10:49:31 embed-certs-499088 kubelet[1304]: I1101 10:49:31.368889    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dqf86" podStartSLOduration=2.3688678850000002 podStartE2EDuration="2.368867885s" podCreationTimestamp="2025-11-01 10:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:49:31.363497119 +0000 UTC m=+6.530413766" watchObservedRunningTime="2025-11-01 10:49:31.368867885 +0000 UTC m=+6.535784524"
	Nov 01 10:49:31 embed-certs-499088 kubelet[1304]: I1101 10:49:31.456176    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9sr9j" podStartSLOduration=2.456157039 podStartE2EDuration="2.456157039s" podCreationTimestamp="2025-11-01 10:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:49:31.417361549 +0000 UTC m=+6.584278196" watchObservedRunningTime="2025-11-01 10:49:31.456157039 +0000 UTC m=+6.623073678"
	Nov 01 10:50:11 embed-certs-499088 kubelet[1304]: I1101 10:50:11.402379    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:50:11 embed-certs-499088 kubelet[1304]: I1101 10:50:11.536619    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b76d194-6689-4f01-aa5d-c2d0b63808ed-config-volume\") pod \"coredns-66bc5c9577-pdh6r\" (UID: \"5b76d194-6689-4f01-aa5d-c2d0b63808ed\") " pod="kube-system/coredns-66bc5c9577-pdh6r"
	Nov 01 10:50:11 embed-certs-499088 kubelet[1304]: I1101 10:50:11.536915    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5678aab9-c0e9-46c3-929c-04fd8bcc56db-tmp\") pod \"storage-provisioner\" (UID: \"5678aab9-c0e9-46c3-929c-04fd8bcc56db\") " pod="kube-system/storage-provisioner"
	Nov 01 10:50:11 embed-certs-499088 kubelet[1304]: I1101 10:50:11.537123    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfkvh\" (UniqueName: \"kubernetes.io/projected/5678aab9-c0e9-46c3-929c-04fd8bcc56db-kube-api-access-sfkvh\") pod \"storage-provisioner\" (UID: \"5678aab9-c0e9-46c3-929c-04fd8bcc56db\") " pod="kube-system/storage-provisioner"
	Nov 01 10:50:11 embed-certs-499088 kubelet[1304]: I1101 10:50:11.537242    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqg7w\" (UniqueName: \"kubernetes.io/projected/5b76d194-6689-4f01-aa5d-c2d0b63808ed-kube-api-access-zqg7w\") pod \"coredns-66bc5c9577-pdh6r\" (UID: \"5b76d194-6689-4f01-aa5d-c2d0b63808ed\") " pod="kube-system/coredns-66bc5c9577-pdh6r"
	Nov 01 10:50:11 embed-certs-499088 kubelet[1304]: W1101 10:50:11.803898    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-01c094dc3ebe4bf2ee00e879d1d4d3fc9bd252a049635096043b91b42ccb3369 WatchSource:0}: Error finding container 01c094dc3ebe4bf2ee00e879d1d4d3fc9bd252a049635096043b91b42ccb3369: Status 404 returned error can't find the container with id 01c094dc3ebe4bf2ee00e879d1d4d3fc9bd252a049635096043b91b42ccb3369
	Nov 01 10:50:11 embed-certs-499088 kubelet[1304]: W1101 10:50:11.829670    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-b02dd61a04287f9b91608c1f6761b86fbfeaac0c5034d55a10d8ef0ffae9cb92 WatchSource:0}: Error finding container b02dd61a04287f9b91608c1f6761b86fbfeaac0c5034d55a10d8ef0ffae9cb92: Status 404 returned error can't find the container with id b02dd61a04287f9b91608c1f6761b86fbfeaac0c5034d55a10d8ef0ffae9cb92
	Nov 01 10:50:12 embed-certs-499088 kubelet[1304]: I1101 10:50:12.398748    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pdh6r" podStartSLOduration=42.398727386 podStartE2EDuration="42.398727386s" podCreationTimestamp="2025-11-01 10:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:50:12.374552817 +0000 UTC m=+47.541469464" watchObservedRunningTime="2025-11-01 10:50:12.398727386 +0000 UTC m=+47.565644033"
	Nov 01 10:50:12 embed-certs-499088 kubelet[1304]: I1101 10:50:12.436080    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.436058672 podStartE2EDuration="41.436058672s" podCreationTimestamp="2025-11-01 10:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:50:12.402043339 +0000 UTC m=+47.568960011" watchObservedRunningTime="2025-11-01 10:50:12.436058672 +0000 UTC m=+47.602975311"
	Nov 01 10:50:14 embed-certs-499088 kubelet[1304]: I1101 10:50:14.561931    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g59cg\" (UniqueName: \"kubernetes.io/projected/d07dd95a-7eea-459b-8c02-1476a2c71627-kube-api-access-g59cg\") pod \"busybox\" (UID: \"d07dd95a-7eea-459b-8c02-1476a2c71627\") " pod="default/busybox"
	Nov 01 10:50:14 embed-certs-499088 kubelet[1304]: W1101 10:50:14.754581    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-2adc0ba804cc56ee2ed3dfb2d5e8a1b2a5af793ffb6b5547205211f8102efd6f WatchSource:0}: Error finding container 2adc0ba804cc56ee2ed3dfb2d5e8a1b2a5af793ffb6b5547205211f8102efd6f: Status 404 returned error can't find the container with id 2adc0ba804cc56ee2ed3dfb2d5e8a1b2a5af793ffb6b5547205211f8102efd6f
	
	
	==> storage-provisioner [f5d02af29c876713951819203239b1cdef84a8e6208c08be27438a60115e618f] <==
	I1101 10:50:11.907959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:50:11.931379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:50:11.931428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:50:11.950837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:11.960575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:50:11.960832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:50:11.965574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-499088_c5624c77-640a-497c-9dd9-c41bf8f0c2a7!
	I1101 10:50:11.971972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5491653a-fc59-4529-adde-932caf894aba", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-499088_c5624c77-640a-497c-9dd9-c41bf8f0c2a7 became leader
	W1101 10:50:11.988124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:11.996007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:50:12.069855       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-499088_c5624c77-640a-497c-9dd9-c41bf8f0c2a7!
	W1101 10:50:14.001406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:14.008891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:16.011870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:16.017558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:18.025931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:18.037387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:20.041270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:20.047732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:22.051499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:22.059006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:24.062933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:24.072368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-499088 -n embed-certs-499088
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-499088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.72s)
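
The post-mortem above shells out to kubectl with a field selector to surface any pods that are not Running. For readers following along in Go, here is a minimal client-go sketch of the same query; the kubeconfig path is a placeholder and the snippet is an illustration, not part of the test harness.

// Sketch only: replicate the post-mortem's
//   kubectl get po -A --field-selector=status.phase!=Running
// using client-go. The kubeconfig path below is a placeholder.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// An empty namespace lists across all namespaces, matching kubectl's -A flag.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

An empty result here means every pod is Running, which is why the post-mortem treats this query as a quick sanity check rather than a failure condition.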

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-014050 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-014050 --alsologtostderr -v=1: exit status 80 (2.56270742s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-014050 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:50:59.172078  490269 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:50:59.172182  490269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:50:59.172230  490269 out.go:374] Setting ErrFile to fd 2...
	I1101 10:50:59.172236  490269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:50:59.172588  490269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:50:59.172896  490269 out.go:368] Setting JSON to false
	I1101 10:50:59.172938  490269 mustload.go:66] Loading cluster: default-k8s-diff-port-014050
	I1101 10:50:59.173371  490269 config.go:182] Loaded profile config "default-k8s-diff-port-014050": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:50:59.173903  490269 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-014050 --format={{.State.Status}}
	I1101 10:50:59.195057  490269 host.go:66] Checking if "default-k8s-diff-port-014050" exists ...
	I1101 10:50:59.195391  490269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:50:59.302589  490269 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 10:50:59.288337084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:50:59.303230  490269 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-014050 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:50:59.308557  490269 out.go:179] * Pausing node default-k8s-diff-port-014050 ... 
	I1101 10:50:59.313448  490269 host.go:66] Checking if "default-k8s-diff-port-014050" exists ...
	I1101 10:50:59.313810  490269 ssh_runner.go:195] Run: systemctl --version
	I1101 10:50:59.313867  490269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-014050
	I1101 10:50:59.346324  490269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/default-k8s-diff-port-014050/id_rsa Username:docker}
	I1101 10:50:59.461166  490269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:50:59.497851  490269 pause.go:52] kubelet running: true
	I1101 10:50:59.497933  490269 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:50:59.860494  490269 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:50:59.860638  490269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:50:59.971973  490269 cri.go:89] found id: "735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2"
	I1101 10:50:59.971997  490269 cri.go:89] found id: "8d8b622a022f7eeec2e8a7f9dc8fcd0660f5f440dd391b4c90267eacedb4922f"
	I1101 10:50:59.972003  490269 cri.go:89] found id: "c2c63b18b442a40d362431e7e36f733ae5f127ab2f711d4c305ce4437a974ab0"
	I1101 10:50:59.972007  490269 cri.go:89] found id: "ef258ac904917d8b16125eb5674949803504b091f5afd202b51ee52257d68a8c"
	I1101 10:50:59.972010  490269 cri.go:89] found id: "87107907b9299aea123d724a736202d76b246bb22d6a94bfc659f83cee018621"
	I1101 10:50:59.972014  490269 cri.go:89] found id: "3f156e559c73a53c1e70f973aee6243c1d143da20ede0269a961550635cfc68a"
	I1101 10:50:59.972036  490269 cri.go:89] found id: "caca3cf4c81ffb29f4d2c8e47aa22c4b3756d0636b9899246218e95da10ca2c5"
	I1101 10:50:59.972046  490269 cri.go:89] found id: "6a8858ab03de1ec723c664c7147120ea9ef2a84d11e0ceb376a78665d8f48565"
	I1101 10:50:59.972050  490269 cri.go:89] found id: "a30f2e6b80f40e1d33c1f0db013b621853606390ec749bfcaa7e3fa4a17d2938"
	I1101 10:50:59.972057  490269 cri.go:89] found id: "6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	I1101 10:50:59.972061  490269 cri.go:89] found id: "185dae504dec1c5863268ff5c50d7e568be7f24f21e036759e0abbb319841cf8"
	I1101 10:50:59.972064  490269 cri.go:89] found id: ""
	I1101 10:50:59.972127  490269 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:50:59.987203  490269 retry.go:31] will retry after 145.975319ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:50:59Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:51:00.133546  490269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:51:00.178899  490269 pause.go:52] kubelet running: false
	I1101 10:51:00.178974  490269 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:51:00.579227  490269 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:51:00.579364  490269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:51:00.673325  490269 cri.go:89] found id: "735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2"
	I1101 10:51:00.673404  490269 cri.go:89] found id: "8d8b622a022f7eeec2e8a7f9dc8fcd0660f5f440dd391b4c90267eacedb4922f"
	I1101 10:51:00.673425  490269 cri.go:89] found id: "c2c63b18b442a40d362431e7e36f733ae5f127ab2f711d4c305ce4437a974ab0"
	I1101 10:51:00.673445  490269 cri.go:89] found id: "ef258ac904917d8b16125eb5674949803504b091f5afd202b51ee52257d68a8c"
	I1101 10:51:00.673478  490269 cri.go:89] found id: "87107907b9299aea123d724a736202d76b246bb22d6a94bfc659f83cee018621"
	I1101 10:51:00.673505  490269 cri.go:89] found id: "3f156e559c73a53c1e70f973aee6243c1d143da20ede0269a961550635cfc68a"
	I1101 10:51:00.673527  490269 cri.go:89] found id: "caca3cf4c81ffb29f4d2c8e47aa22c4b3756d0636b9899246218e95da10ca2c5"
	I1101 10:51:00.673546  490269 cri.go:89] found id: "6a8858ab03de1ec723c664c7147120ea9ef2a84d11e0ceb376a78665d8f48565"
	I1101 10:51:00.673565  490269 cri.go:89] found id: "a30f2e6b80f40e1d33c1f0db013b621853606390ec749bfcaa7e3fa4a17d2938"
	I1101 10:51:00.673596  490269 cri.go:89] found id: "6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	I1101 10:51:00.673619  490269 cri.go:89] found id: "185dae504dec1c5863268ff5c50d7e568be7f24f21e036759e0abbb319841cf8"
	I1101 10:51:00.673638  490269 cri.go:89] found id: ""
	I1101 10:51:00.673721  490269 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:00.686208  490269 retry.go:31] will retry after 468.803482ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:00Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:51:01.155957  490269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:51:01.175908  490269 pause.go:52] kubelet running: false
	I1101 10:51:01.176040  490269 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:51:01.512105  490269 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:51:01.512239  490269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:51:01.624122  490269 cri.go:89] found id: "735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2"
	I1101 10:51:01.624196  490269 cri.go:89] found id: "8d8b622a022f7eeec2e8a7f9dc8fcd0660f5f440dd391b4c90267eacedb4922f"
	I1101 10:51:01.624216  490269 cri.go:89] found id: "c2c63b18b442a40d362431e7e36f733ae5f127ab2f711d4c305ce4437a974ab0"
	I1101 10:51:01.624233  490269 cri.go:89] found id: "ef258ac904917d8b16125eb5674949803504b091f5afd202b51ee52257d68a8c"
	I1101 10:51:01.624267  490269 cri.go:89] found id: "87107907b9299aea123d724a736202d76b246bb22d6a94bfc659f83cee018621"
	I1101 10:51:01.624291  490269 cri.go:89] found id: "3f156e559c73a53c1e70f973aee6243c1d143da20ede0269a961550635cfc68a"
	I1101 10:51:01.624310  490269 cri.go:89] found id: "caca3cf4c81ffb29f4d2c8e47aa22c4b3756d0636b9899246218e95da10ca2c5"
	I1101 10:51:01.624329  490269 cri.go:89] found id: "6a8858ab03de1ec723c664c7147120ea9ef2a84d11e0ceb376a78665d8f48565"
	I1101 10:51:01.624360  490269 cri.go:89] found id: "a30f2e6b80f40e1d33c1f0db013b621853606390ec749bfcaa7e3fa4a17d2938"
	I1101 10:51:01.624387  490269 cri.go:89] found id: "6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	I1101 10:51:01.624406  490269 cri.go:89] found id: "185dae504dec1c5863268ff5c50d7e568be7f24f21e036759e0abbb319841cf8"
	I1101 10:51:01.624451  490269 cri.go:89] found id: ""
	I1101 10:51:01.624548  490269 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:01.643697  490269 out.go:203] 
	W1101 10:51:01.646807  490269 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:01.646889  490269 out.go:285] * 
	* 
	W1101 10:51:01.653144  490269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:01.658374  490269 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-014050 --alsologtostderr -v=1 failed: exit status 80
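
The stderr above shows the shape of the failure: minikube disables the kubelet, lists CRI containers per namespace via crictl, then runs `sudo runc list -f json`, retrying with growing waits (about 146 ms, then about 469 ms) before surfacing GUEST_PAUSE because /run/runc does not exist on this crio node. The sketch below reproduces just that retry-with-backoff pattern around the same command; the attempt count and delays are illustrative, not minikube's actual retry policy.

// Illustrative sketch of the retry pattern visible in the stderr above:
// run `sudo runc list -f json` and back off between attempts.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func listRuncContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	delay := 150 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRuncContainers()
		if err == nil {
			fmt.Printf("runc state: %s\n", out)
			return
		}
		// On this node the command fails with
		// "open /run/runc: no such file or directory", so every attempt errors out.
		fmt.Printf("attempt %d failed (%v), retrying after %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 3 // roughly mirrors the growing waits in the log
	}
	fmt.Println("giving up after retries")
}

Run on a node in this state, the loop exhausts its attempts, which is exactly what the test observes as exit status 80.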
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-014050
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-014050:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6",
	        "Created": "2025-11-01T10:48:10.158242588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:49:52.808860686Z",
	            "FinishedAt": "2025-11-01T10:49:51.925169928Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/hostname",
	        "HostsPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/hosts",
	        "LogPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6-json.log",
	        "Name": "/default-k8s-diff-port-014050",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-014050:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-014050",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6",
	                "LowerDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-014050",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-014050/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-014050",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-014050",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-014050",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c5ec49f2d04d5f6cb956ea83b0a7ff625c883184f2ea07e6364951afe370475",
	            "SandboxKey": "/var/run/docker/netns/8c5ec49f2d04",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-014050": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:dd:18:0a:9f:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f438d7bf3e688fe5caa6340faa58ea25b1a6b5b20c8ce821e7570063338cd36",
	                    "EndpointID": "2a6a8d046dc09453fe0f86ad9320c5ed565cdf16edf5b9ed56a172236d163301",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-014050",
	                        "70da30e95fce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
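
The pause attempt located the node's SSH endpoint by rendering a Go template against `docker container inspect` (the cli_runner call in the stderr), and the inspect output above confirms that 22/tcp is published on 127.0.0.1:33438. A small sketch of the same lookup, using the exact template from the log; it assumes a local docker CLI and the container name shown above.

// Sketch: extract the published host port for 22/tcp the same way the
// stderr's cli_runner call does, via a Go template passed to
// `docker container inspect`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"default-k8s-diff-port-014050").Output()
	if err != nil {
		panic(err)
	}
	// For the container inspected above this prints "33438", the port the
	// SSH client then dials on 127.0.0.1.
	fmt.Println(strings.TrimSpace(string(out)))
}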
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050: exit status 2 (487.714089ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
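
Both post-mortems probe the profile with `minikube status` and a Go template selecting a single field ({{.APIServer}} for the earlier failure, {{.Host}} here). The same template syntax can pull several fields in one call, as in the sketch below; the Kubelet field name is assumed from minikube's default status output, and a non-zero exit like the one above still leaves usable output.

// Sketch: query several status fields in one call. Field names other than
// Host and APIServer are assumptions based on the default status output.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"-p", "default-k8s-diff-port-014050",
		"--format", "host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}").CombinedOutput()
	// A non-zero exit (as seen above, exit status 2) is reported via err but
	// the captured output still describes which component is not running.
	fmt.Printf("%s (err=%v)\n", out, err)
}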
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-014050 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-014050 logs -n 25: (1.890408887s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-186677 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ delete  │ -p cert-options-186677                                                                                                                                                                                                                        │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │                     │
	│ stop    │ -p old-k8s-version-245622 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-245622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:47 UTC │
	│ image   │ old-k8s-version-245622 image list --format=json                                                                                                                                                                                               │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │ 01 Nov 25 10:47 UTC │
	│ pause   │ -p old-k8s-version-245622 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │                     │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p cert-expiration-308600                                                                                                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-014050 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-014050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ stop    │ -p embed-certs-499088 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ image   │ default-k8s-diff-port-014050 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:50:38
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:50:38.549998  488285 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:50:38.550733  488285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:50:38.550879  488285 out.go:374] Setting ErrFile to fd 2...
	I1101 10:50:38.550907  488285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:50:38.551677  488285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:50:38.552815  488285 out.go:368] Setting JSON to false
	I1101 10:50:38.554887  488285 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9191,"bootTime":1761985048,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:50:38.554964  488285 start.go:143] virtualization:  
	I1101 10:50:38.558075  488285 out.go:179] * [embed-certs-499088] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:50:38.561895  488285 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:50:38.562005  488285 notify.go:221] Checking for updates...
	I1101 10:50:38.568211  488285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:50:38.571234  488285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:50:38.574288  488285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:50:38.577278  488285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:50:38.580302  488285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:50:38.583841  488285 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:50:38.584406  488285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:50:38.613088  488285 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:50:38.613223  488285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:50:38.675905  488285 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:50:38.666767407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:50:38.676022  488285 docker.go:319] overlay module found
	I1101 10:50:38.679139  488285 out.go:179] * Using the docker driver based on existing profile
	I1101 10:50:38.681995  488285 start.go:309] selected driver: docker
	I1101 10:50:38.682019  488285 start.go:930] validating driver "docker" against &{Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:50:38.682131  488285 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:50:38.684113  488285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:50:38.753602  488285 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:50:38.744169812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
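
	The driver health probe above is just a JSON dump of "docker system info"; minikube decodes the full structure in Go (info.go). A minimal shell sketch of the same check, where jq is used only for illustration and is not part of minikube:

	    docker system info --format '{{json .}}' \
	      | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal, OSType, Architecture}'
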
	I1101 10:50:38.753943  488285 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:50:38.753979  488285 cni.go:84] Creating CNI manager for ""
	I1101 10:50:38.754040  488285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:50:38.754084  488285 start.go:353] cluster config:
	{Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:50:38.760374  488285 out.go:179] * Starting "embed-certs-499088" primary control-plane node in "embed-certs-499088" cluster
	I1101 10:50:38.765007  488285 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:50:38.767974  488285 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:50:38.774568  488285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:50:38.774634  488285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:50:38.774648  488285 cache.go:59] Caching tarball of preloaded images
	I1101 10:50:38.774669  488285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:50:38.774748  488285 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:50:38.774759  488285 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:50:38.774878  488285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/config.json ...
	I1101 10:50:38.796076  488285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:50:38.796102  488285 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:50:38.796120  488285 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:50:38.796144  488285 start.go:360] acquireMachinesLock for embed-certs-499088: {Name:mk5ad922c2d628b6bdeae9b2175ff7077c575607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:50:38.796206  488285 start.go:364] duration metric: took 38.458µs to acquireMachinesLock for "embed-certs-499088"
	I1101 10:50:38.796232  488285 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:50:38.796238  488285 fix.go:54] fixHost starting: 
	I1101 10:50:38.796499  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:38.815060  488285 fix.go:112] recreateIfNeeded on embed-certs-499088: state=Stopped err=<nil>
	W1101 10:50:38.815090  488285 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:50:38.057492  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	W1101 10:50:40.551501  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	I1101 10:50:38.818020  488285 out.go:252] * Restarting existing docker container for "embed-certs-499088" ...
	I1101 10:50:38.818114  488285 cli_runner.go:164] Run: docker start embed-certs-499088
	I1101 10:50:39.164453  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:39.184448  488285 kic.go:430] container "embed-certs-499088" state is running.
	I1101 10:50:39.184853  488285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
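
	Restarting an existing profile reuses the stopped node container: "docker start" followed by inspects with the same Go-template format strings logged above. A condensed sketch of that sequence:

	    docker start embed-certs-499088
	    docker container inspect embed-certs-499088 --format '{{.State.Status}}'
	    docker container inspect -f \
	      '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' \
	      embed-certs-499088
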
	I1101 10:50:39.208443  488285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/config.json ...
	I1101 10:50:39.208683  488285 machine.go:94] provisionDockerMachine start ...
	I1101 10:50:39.208754  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:39.232194  488285 main.go:143] libmachine: Using SSH client type: native
	I1101 10:50:39.232774  488285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1101 10:50:39.232791  488285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:50:39.233627  488285 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:50:42.392682  488285 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-499088
	
	I1101 10:50:42.392719  488285 ubuntu.go:182] provisioning hostname "embed-certs-499088"
	I1101 10:50:42.392795  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:42.411745  488285 main.go:143] libmachine: Using SSH client type: native
	I1101 10:50:42.412089  488285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1101 10:50:42.412106  488285 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-499088 && echo "embed-certs-499088" | sudo tee /etc/hostname
	I1101 10:50:42.574692  488285 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-499088
	
	I1101 10:50:42.574785  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:42.591738  488285 main.go:143] libmachine: Using SSH client type: native
	I1101 10:50:42.592040  488285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1101 10:50:42.592062  488285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-499088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-499088/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-499088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:50:42.741375  488285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
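
	The provisioner reaches the node over SSH through a published host port rather than the container IP. A rough equivalent of the port lookup and hostname step above, using the key path, user, and address from this run (SSH options such as host-key checking are omitted):

	    KEY=/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa
	    PORT=$(docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-499088)
	    ssh -i "$KEY" -p "$PORT" docker@127.0.0.1 \
	      'sudo hostname embed-certs-499088 && echo "embed-certs-499088" | sudo tee /etc/hostname'
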
	I1101 10:50:42.741401  488285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:50:42.741423  488285 ubuntu.go:190] setting up certificates
	I1101 10:50:42.741439  488285 provision.go:84] configureAuth start
	I1101 10:50:42.741499  488285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
	I1101 10:50:42.758404  488285 provision.go:143] copyHostCerts
	I1101 10:50:42.758479  488285 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:50:42.758501  488285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:50:42.758579  488285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:50:42.758688  488285 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:50:42.758707  488285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:50:42.758735  488285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:50:42.758801  488285 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:50:42.758806  488285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:50:42.758830  488285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:50:42.758886  488285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.embed-certs-499088 san=[127.0.0.1 192.168.76.2 embed-certs-499088 localhost minikube]
	I1101 10:50:42.931353  488285 provision.go:177] copyRemoteCerts
	I1101 10:50:42.931428  488285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:50:42.931467  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:42.952259  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.069072  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:50:43.090198  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 10:50:43.110553  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:50:43.130331  488285 provision.go:87] duration metric: took 388.86823ms to configureAuth
	I1101 10:50:43.130361  488285 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:50:43.130558  488285 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:50:43.130664  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.148285  488285 main.go:143] libmachine: Using SSH client type: native
	I1101 10:50:43.148599  488285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1101 10:50:43.148618  488285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:50:43.481474  488285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:50:43.481499  488285 machine.go:97] duration metric: took 4.272798667s to provisionDockerMachine
	I1101 10:50:43.481510  488285 start.go:293] postStartSetup for "embed-certs-499088" (driver="docker")
	I1101 10:50:43.481521  488285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:50:43.481618  488285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:50:43.481664  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.500978  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.609619  488285 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:50:43.613371  488285 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:50:43.613403  488285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:50:43.613415  488285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:50:43.613473  488285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:50:43.613555  488285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:50:43.613661  488285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:50:43.621731  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:50:43.640961  488285 start.go:296] duration metric: took 159.435089ms for postStartSetup
	I1101 10:50:43.641095  488285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:50:43.641173  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.658498  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.758021  488285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:50:43.762870  488285 fix.go:56] duration metric: took 4.966625241s for fixHost
	I1101 10:50:43.762895  488285 start.go:83] releasing machines lock for "embed-certs-499088", held for 4.966674374s
	I1101 10:50:43.762972  488285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
	I1101 10:50:43.779802  488285 ssh_runner.go:195] Run: cat /version.json
	I1101 10:50:43.779863  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.780131  488285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:50:43.780188  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.802260  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.814524  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.908603  488285 ssh_runner.go:195] Run: systemctl --version
	I1101 10:50:44.010009  488285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:50:44.057051  488285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:50:44.062205  488285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:50:44.062345  488285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:50:44.071919  488285 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
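
	Any pre-existing bridge or podman CNI configs are renamed out of the way so kindnet is the only active CNI; here nothing was found to disable. A cleaned-up form of the find/mv invocation above (the original also prints each path it moves):

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
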
	I1101 10:50:44.071944  488285 start.go:496] detecting cgroup driver to use...
	I1101 10:50:44.071977  488285 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:50:44.072045  488285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:50:44.094313  488285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:50:44.110089  488285 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:50:44.110169  488285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:50:44.131878  488285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:50:44.145089  488285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:50:44.264672  488285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:50:44.387081  488285 docker.go:234] disabling docker service ...
	I1101 10:50:44.387180  488285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:50:44.402847  488285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:50:44.417055  488285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:50:44.537563  488285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:50:44.682650  488285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
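
	Because the runtime is CRI-O, the Docker-side CRI services are stopped, disabled, and masked so they cannot reclaim the CRI socket. The same systemctl sequence as above, collapsed into a sketch:

	    sudo systemctl stop -f cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
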
	I1101 10:50:44.696376  488285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:50:44.714166  488285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:50:44.714288  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.723731  488285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:50:44.723830  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.732859  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.741864  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.750838  488285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:50:44.758966  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.767631  488285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.776715  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.785960  488285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:50:44.793489  488285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:50:44.801964  488285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:50:44.925705  488285 ssh_runner.go:195] Run: sudo systemctl restart crio
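
	The CRI-O reconfiguration above is a handful of in-place sed edits plus a restart: point crictl at the CRI-O socket, pin the pause image, force the cgroupfs manager, and enable forwarding and unprivileged ports. A condensed sketch (the default_sysctls insertion is elided here):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio
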
	I1101 10:50:45.088150  488285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:50:45.088328  488285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:50:45.095306  488285 start.go:564] Will wait 60s for crictl version
	I1101 10:50:45.095488  488285 ssh_runner.go:195] Run: which crictl
	I1101 10:50:45.107278  488285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:50:45.141128  488285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:50:45.141234  488285 ssh_runner.go:195] Run: crio --version
	I1101 10:50:45.199032  488285 ssh_runner.go:195] Run: crio --version
	I1101 10:50:45.262731  488285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1101 10:50:43.052250  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	I1101 10:50:44.550344  485320 pod_ready.go:94] pod "coredns-66bc5c9577-cs5l2" is "Ready"
	I1101 10:50:44.550374  485320 pod_ready.go:86] duration metric: took 37.005472163s for pod "coredns-66bc5c9577-cs5l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.552884  485320 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.557868  485320 pod_ready.go:94] pod "etcd-default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:44.557894  485320 pod_ready.go:86] duration metric: took 4.98563ms for pod "etcd-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.560518  485320 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.565049  485320 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:44.565077  485320 pod_ready.go:86] duration metric: took 4.532957ms for pod "kube-apiserver-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.567233  485320 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.748904  485320 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:44.749029  485320 pod_ready.go:86] duration metric: took 181.730345ms for pod "kube-controller-manager-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.948273  485320 pod_ready.go:83] waiting for pod "kube-proxy-jhf2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:45.350400  485320 pod_ready.go:94] pod "kube-proxy-jhf2k" is "Ready"
	I1101 10:50:45.350437  485320 pod_ready.go:86] duration metric: took 402.134626ms for pod "kube-proxy-jhf2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:45.549753  485320 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:45.949422  485320 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:45.949446  485320 pod_ready.go:86] duration metric: took 399.663333ms for pod "kube-scheduler-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:45.949458  485320 pod_ready.go:40] duration metric: took 38.409530276s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:50:46.060128  485320 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:50:46.065091  485320 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-014050" cluster and "default" namespace by default
	I1101 10:50:45.271958  488285 cli_runner.go:164] Run: docker network inspect embed-certs-499088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:50:45.295251  488285 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:50:45.300944  488285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
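
	The host.minikube.internal entry is refreshed by filtering any stale line out of /etc/hosts and appending the gateway IP, then copying the temp file back over the original, as in the command above:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
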
	I1101 10:50:45.314777  488285 kubeadm.go:884] updating cluster {Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:50:45.314919  488285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:50:45.315002  488285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:50:45.383915  488285 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:50:45.383946  488285 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:50:45.384012  488285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:50:45.424284  488285 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:50:45.424312  488285 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:50:45.424320  488285 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:50:45.424482  488285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-499088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:50:45.424601  488285 ssh_runner.go:195] Run: crio config
	I1101 10:50:45.504287  488285 cni.go:84] Creating CNI manager for ""
	I1101 10:50:45.504310  488285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:50:45.504328  488285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:50:45.504377  488285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-499088 NodeName:embed-certs-499088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:50:45.504549  488285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-499088"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:50:45.504664  488285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:50:45.513193  488285 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:50:45.513292  488285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:50:45.521380  488285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 10:50:45.535953  488285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:50:45.553062  488285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 10:50:45.567879  488285 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:50:45.571548  488285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:50:45.581948  488285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:50:45.702405  488285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:50:45.720994  488285 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088 for IP: 192.168.76.2
	I1101 10:50:45.721074  488285 certs.go:195] generating shared ca certs ...
	I1101 10:50:45.721116  488285 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:45.721293  488285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:50:45.721388  488285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:50:45.721431  488285 certs.go:257] generating profile certs ...
	I1101 10:50:45.721548  488285 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/client.key
	I1101 10:50:45.721645  488285 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key.ee4ebe0a
	I1101 10:50:45.721709  488285 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.key
	I1101 10:50:45.721850  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:50:45.721909  488285 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:50:45.721942  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:50:45.721998  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:50:45.722048  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:50:45.722092  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:50:45.722159  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:50:45.722863  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:50:45.756525  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:50:45.784717  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:50:45.808069  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:50:45.827699  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:50:45.849696  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:50:45.882215  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:50:45.909989  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:50:45.932538  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:50:45.966980  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:50:45.991720  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:50:46.012132  488285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:50:46.026685  488285 ssh_runner.go:195] Run: openssl version
	I1101 10:50:46.034206  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:50:46.049726  488285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:50:46.055855  488285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:50:46.055996  488285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:50:46.134344  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:50:46.144521  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:50:46.154134  488285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:50:46.159177  488285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:50:46.159245  488285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:50:46.206851  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:50:46.217337  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:50:46.231816  488285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:50:46.236855  488285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:50:46.237342  488285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:50:46.288488  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:50:46.298986  488285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:50:46.304312  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:50:46.347905  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:50:46.437169  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:50:46.552391  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:50:46.622128  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:50:46.716962  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
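
	Certificate validation above relies on two openssl idioms: CA trust links named after the subject hash, and a 24-hour expiry check via -checkend 86400 (exit status 0 means the cert is still valid a day from now). A short sketch with the paths from this run:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	    openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for >24h"
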
	I1101 10:50:46.816487  488285 kubeadm.go:401] StartCluster: {Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:50:46.816579  488285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:50:46.816655  488285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:50:46.858949  488285 cri.go:89] found id: "0de30b77d1ca10da59b96521a28d795e3e2f58d2bf5933e2fc6be1269644272f"
	I1101 10:50:46.858969  488285 cri.go:89] found id: "a312b63badfe91286205ab3f2506b1f28b4e42298c8d0022b0e1c17bcddc1e12"
	I1101 10:50:46.858974  488285 cri.go:89] found id: "0ef612cf67931e99b0ff0b2cd78a42bcb290e5834448357a04f331cca1ab13cc"
	I1101 10:50:46.858979  488285 cri.go:89] found id: "59e8eb3202b226a9242a2418d10ad312d3fe21ba3c8163fbf7bfede124b48607"
	I1101 10:50:46.858982  488285 cri.go:89] found id: ""
	I1101 10:50:46.859036  488285 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:50:46.877243  488285 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:50:46Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:50:46.877337  488285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:50:46.889039  488285 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:50:46.889056  488285 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:50:46.889111  488285 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:50:46.901661  488285 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:50:46.902197  488285 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-499088" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:50:46.902429  488285 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-499088" cluster setting kubeconfig missing "embed-certs-499088" context setting]
	I1101 10:50:46.902893  488285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:46.904511  488285 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:50:46.915462  488285 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:50:46.915538  488285 kubeadm.go:602] duration metric: took 26.475948ms to restartPrimaryControlPlane
	I1101 10:50:46.915561  488285 kubeadm.go:403] duration metric: took 99.084823ms to StartCluster
	I1101 10:50:46.915610  488285 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:46.915697  488285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:50:46.917050  488285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:46.917612  488285 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:50:46.917754  488285 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:50:46.917839  488285 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-499088"
	I1101 10:50:46.917858  488285 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-499088"
	W1101 10:50:46.917864  488285 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:50:46.917887  488285 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:50:46.918428  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:46.918592  488285 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:50:46.919056  488285 addons.go:70] Setting default-storageclass=true in profile "embed-certs-499088"
	I1101 10:50:46.919078  488285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-499088"
	I1101 10:50:46.919310  488285 addons.go:70] Setting dashboard=true in profile "embed-certs-499088"
	I1101 10:50:46.919345  488285 addons.go:239] Setting addon dashboard=true in "embed-certs-499088"
	I1101 10:50:46.919349  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	W1101 10:50:46.919352  488285 addons.go:248] addon dashboard should already be in state true
	I1101 10:50:46.919377  488285 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:50:46.919953  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:46.924531  488285 out.go:179] * Verifying Kubernetes components...
	I1101 10:50:46.927521  488285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:50:47.002395  488285 addons.go:239] Setting addon default-storageclass=true in "embed-certs-499088"
	W1101 10:50:47.002427  488285 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:50:47.002455  488285 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:50:47.002920  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:47.003051  488285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:50:47.007191  488285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:50:47.007215  488285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:50:47.007294  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:47.017236  488285 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:50:47.025814  488285 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:50:47.033096  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:50:47.033125  488285 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:50:47.033203  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:47.062589  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:47.068635  488285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:50:47.068663  488285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:50:47.068726  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:47.082069  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:47.107309  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:47.299759  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:50:47.299839  488285 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:50:47.346567  488285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:50:47.357815  488285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:50:47.368367  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:50:47.368392  488285 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:50:47.431418  488285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:50:47.438479  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:50:47.438552  488285 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:50:47.509622  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:50:47.509646  488285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:50:47.602280  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:50:47.602305  488285 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:50:47.672209  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:50:47.672234  488285 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:50:47.695472  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:50:47.695499  488285 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:50:47.716882  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:50:47.716907  488285 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:50:47.737448  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:50:47.737481  488285 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:50:47.760742  488285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:50:51.363797  488285 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.00589875s)
	I1101 10:50:51.363852  488285 node_ready.go:35] waiting up to 6m0s for node "embed-certs-499088" to be "Ready" ...
	I1101 10:50:51.363929  488285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.01725394s)
	I1101 10:50:51.392228  488285 node_ready.go:49] node "embed-certs-499088" is "Ready"
	I1101 10:50:51.392258  488285 node_ready.go:38] duration metric: took 28.366517ms for node "embed-certs-499088" to be "Ready" ...
	I1101 10:50:51.392272  488285 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:50:51.392373  488285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:50:52.774878  488285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.343374731s)
	I1101 10:50:52.774993  488285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.014220126s)
	I1101 10:50:52.775137  488285 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.382748528s)
	I1101 10:50:52.775157  488285 api_server.go:72] duration metric: took 5.85651804s to wait for apiserver process to appear ...
	I1101 10:50:52.775163  488285 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:50:52.775179  488285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:50:52.778080  488285 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-499088 addons enable metrics-server
	
	I1101 10:50:52.781434  488285 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1101 10:50:52.784524  488285 addons.go:515] duration metric: took 5.866740158s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1101 10:50:52.785190  488285 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:50:52.785212  488285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:50:53.275859  488285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:50:53.288761  488285 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:50:53.290516  488285 api_server.go:141] control plane version: v1.34.1
	I1101 10:50:53.290541  488285 api_server.go:131] duration metric: took 515.371866ms to wait for apiserver health ...
	I1101 10:50:53.290549  488285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:50:53.293715  488285 system_pods.go:59] 8 kube-system pods found
	I1101 10:50:53.293839  488285 system_pods.go:61] "coredns-66bc5c9577-pdh6r" [5b76d194-6689-4f01-aa5d-c2d0b63808ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:53.293877  488285 system_pods.go:61] "etcd-embed-certs-499088" [2096f7af-f76e-4736-b77a-60c61146d542] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:50:53.293916  488285 system_pods.go:61] "kindnet-9sr9j" [a24caca1-3f4b-4d34-b663-c58a152bfa02] Running
	I1101 10:50:53.293944  488285 system_pods.go:61] "kube-apiserver-embed-certs-499088" [599d58e4-1782-4266-bc1e-0eda23f68ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:50:53.293969  488285 system_pods.go:61] "kube-controller-manager-embed-certs-499088" [1cacce4c-3b57-4821-9a97-123186b7a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:50:53.294004  488285 system_pods.go:61] "kube-proxy-dqf86" [92677bfa-cc3f-4940-89f9-23d383e5dba9] Running
	I1101 10:50:53.294035  488285 system_pods.go:61] "kube-scheduler-embed-certs-499088" [3c20f3ae-0d1a-440e-86ae-4f691c6988cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:50:53.294066  488285 system_pods.go:61] "storage-provisioner" [5678aab9-c0e9-46c3-929c-04fd8bcc56db] Running
	I1101 10:50:53.294106  488285 system_pods.go:74] duration metric: took 3.549646ms to wait for pod list to return data ...
	I1101 10:50:53.294127  488285 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:50:53.296630  488285 default_sa.go:45] found service account: "default"
	I1101 10:50:53.296697  488285 default_sa.go:55] duration metric: took 2.547839ms for default service account to be created ...
	I1101 10:50:53.296722  488285 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:50:53.299825  488285 system_pods.go:86] 8 kube-system pods found
	I1101 10:50:53.299932  488285 system_pods.go:89] "coredns-66bc5c9577-pdh6r" [5b76d194-6689-4f01-aa5d-c2d0b63808ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:53.299974  488285 system_pods.go:89] "etcd-embed-certs-499088" [2096f7af-f76e-4736-b77a-60c61146d542] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:50:53.300000  488285 system_pods.go:89] "kindnet-9sr9j" [a24caca1-3f4b-4d34-b663-c58a152bfa02] Running
	I1101 10:50:53.300025  488285 system_pods.go:89] "kube-apiserver-embed-certs-499088" [599d58e4-1782-4266-bc1e-0eda23f68ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:50:53.300071  488285 system_pods.go:89] "kube-controller-manager-embed-certs-499088" [1cacce4c-3b57-4821-9a97-123186b7a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:50:53.300098  488285 system_pods.go:89] "kube-proxy-dqf86" [92677bfa-cc3f-4940-89f9-23d383e5dba9] Running
	I1101 10:50:53.300123  488285 system_pods.go:89] "kube-scheduler-embed-certs-499088" [3c20f3ae-0d1a-440e-86ae-4f691c6988cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:50:53.300162  488285 system_pods.go:89] "storage-provisioner" [5678aab9-c0e9-46c3-929c-04fd8bcc56db] Running
	I1101 10:50:53.300189  488285 system_pods.go:126] duration metric: took 3.448442ms to wait for k8s-apps to be running ...
	I1101 10:50:53.300211  488285 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:50:53.300298  488285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:50:53.315857  488285 system_svc.go:56] duration metric: took 15.636297ms WaitForService to wait for kubelet
	I1101 10:50:53.315932  488285 kubeadm.go:587] duration metric: took 6.397291546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:50:53.315986  488285 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:50:53.320337  488285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:50:53.320369  488285 node_conditions.go:123] node cpu capacity is 2
	I1101 10:50:53.320382  488285 node_conditions.go:105] duration metric: took 4.37792ms to run NodePressure ...
	I1101 10:50:53.320395  488285 start.go:242] waiting for startup goroutines ...
	I1101 10:50:53.320403  488285 start.go:247] waiting for cluster config update ...
	I1101 10:50:53.320414  488285 start.go:256] writing updated cluster config ...
	I1101 10:50:53.320677  488285 ssh_runner.go:195] Run: rm -f paused
	I1101 10:50:53.326556  488285 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:50:53.330739  488285 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pdh6r" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:50:55.337272  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:50:57.837685  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
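	
	The healthz wait recorded above follows a plain poll-until-200 pattern: the first probe at 10:50:52 returns 500 because poststarthook/rbac/bootstrap-roles has not finished, and the retry roughly half a second later returns 200 "ok". Below is a minimal Go sketch of that pattern (an illustration only, not minikube's actual api_server.go code; the URL, retry cadence, and timeout are just the values seen in this log):
	
	// waitForHealthz polls an apiserver /healthz endpoint until it answers 200.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver here serves a self-signed certificate, so verification
		// is skipped for this illustration only.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200 "ok"
				}
				// e.g. 500 while poststarthook/rbac/bootstrap-roles is still pending
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry interval seen in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}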
	
	
	==> CRI-O <==
	Nov 01 10:50:35 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:35.762748835Z" level=info msg="Removed container e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp/dashboard-metrics-scraper" id=beb65dbf-b41d-4189-bc55-416cac625064 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:50:37 default-k8s-diff-port-014050 conmon[1137]: conmon 87107907b9299aea123d <ninfo>: container 1143 exited with status 1
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.755853981Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=180d312a-5fc7-4086-92c2-5dd2d99154a5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.757157476Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=638bc0b8-f387-4617-ae2d-ca18a6c9ef68 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.758657314Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7aec720d-420e-4b34-80bb-27cb8308d20a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.758915031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.763973351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.765324954Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/671e789f82649c46bdee92a1fbda134c893b826dc1bf2b4f1fda8ffefb97d414/merged/etc/passwd: no such file or directory"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.765499495Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/671e789f82649c46bdee92a1fbda134c893b826dc1bf2b4f1fda8ffefb97d414/merged/etc/group: no such file or directory"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.765924434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.78714951Z" level=info msg="Created container 735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2: kube-system/storage-provisioner/storage-provisioner" id=7aec720d-420e-4b34-80bb-27cb8308d20a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.788402117Z" level=info msg="Starting container: 735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2" id=f239283f-c40e-4648-90cc-9bd24e648269 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.791326156Z" level=info msg="Started container" PID=1646 containerID=735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2 description=kube-system/storage-provisioner/storage-provisioner id=f239283f-c40e-4648-90cc-9bd24e648269 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99f38d8bbc069fcb69ba6aa08129df3589af17598db5fb30e1b64ba96477a9a3
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.345935295Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.353096568Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.353281391Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.353355615Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.360322425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.360528884Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.360601689Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.366479742Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.366664252Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.366785065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.378889827Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.379098846Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	735ad2f7c5490       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   99f38d8bbc069       storage-provisioner                                    kube-system
	6405aad239fca       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago       Exited              dashboard-metrics-scraper   2                   e7cfef4205929       dashboard-metrics-scraper-6ffb444bf9-nmbsp             kubernetes-dashboard
	185dae504dec1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   00644e3758e0e       kubernetes-dashboard-855c9754f9-fj5c6                  kubernetes-dashboard
	8d8b622a022f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   4a3bfd63d948a       coredns-66bc5c9577-cs5l2                               kube-system
	3ecb17b9de9a7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   c88418d198be3       busybox                                                default
	c2c63b18b442a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   9e9b7983fc037       kube-proxy-jhf2k                                       kube-system
	ef258ac904917       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   b26e570bfff36       kindnet-j2vhl                                          kube-system
	87107907b9299       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   99f38d8bbc069       storage-provisioner                                    kube-system
	3f156e559c73a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b7c0fe977807c       kube-apiserver-default-k8s-diff-port-014050            kube-system
	caca3cf4c81ff       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4bbb0c2f7b980       etcd-default-k8s-diff-port-014050                      kube-system
	6a8858ab03de1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ec5c8c1a8ff82       kube-controller-manager-default-k8s-diff-port-014050   kube-system
	a30f2e6b80f40       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   69561c81a8e59       kube-scheduler-default-k8s-diff-port-014050            kube-system
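	
	The table above is the node-side container listing for the CRI-O runtime. The exact command the report generator ran is not shown in this excerpt; as a rough sketch, shelling out to crictl ps -a on the node lists the same containers, including exited ones such as the dashboard-metrics-scraper and the first storage-provisioner attempt:
	
	// Minimal sketch (assumption: crictl is installed on the node and pointed at
	// the CRI-O socket); this is not the report generator's own code.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// -a includes exited containers as well as running ones.
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl ps failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}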
	
	
	==> coredns [8d8b622a022f7eeec2e8a7f9dc8fcd0660f5f440dd391b4c90267eacedb4922f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39989 - 57084 "HINFO IN 5875581610471970698.8636234843709778040. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011759626s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-014050
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-014050
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=default-k8s-diff-port-014050
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_48_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:48:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-014050
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:50:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:50:36 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:50:36 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:50:36 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:50:36 +0000   Sat, 01 Nov 2025 10:49:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-014050
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                afada185-3889-484f-a7d8-6b092f3a288a
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-cs5l2                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-014050                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-j2vhl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-014050             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-014050    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-jhf2k                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-014050             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nmbsp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fj5c6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-014050 event: Registered Node default-k8s-diff-port-014050 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-014050 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node default-k8s-diff-port-014050 event: Registered Node default-k8s-diff-port-014050 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [caca3cf4c81ffb29f4d2c8e47aa22c4b3756d0636b9899246218e95da10ca2c5] <==
	{"level":"warn","ts":"2025-11-01T10:50:03.921810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:03.941150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:03.966339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:03.985416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.001465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.018021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.041705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.065873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.088359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.109868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.130094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.145649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.162032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.179526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.201835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.221237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.243296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.266321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.286391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.300558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.322227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.367471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.395889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.441648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.584424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49314","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:51:03 up  2:33,  0 user,  load average: 3.25, 3.41, 2.83
	Linux default-k8s-diff-port-014050 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ef258ac904917d8b16125eb5674949803504b091f5afd202b51ee52257d68a8c] <==
	I1101 10:50:07.226113       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:50:07.226338       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:50:07.226456       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:50:07.227947       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:50:07.228088       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:50:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:50:07.343969       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:50:07.343999       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:50:07.344201       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:50:07.430794       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:50:37.344035       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:50:37.428505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:50:37.428505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:50:37.431115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 10:50:38.744721       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:50:38.744844       1 metrics.go:72] Registering metrics
	I1101 10:50:38.744978       1 controller.go:711] "Syncing nftables rules"
	I1101 10:50:47.345289       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:50:47.345323       1 main.go:301] handling current node
	I1101 10:50:57.349201       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:50:57.349299       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3f156e559c73a53c1e70f973aee6243c1d143da20ede0269a961550635cfc68a] <==
	I1101 10:50:05.933856       1 shared_informer.go:349] "Waiting for caches to sync" controller="ipallocator-repair-controller"
	I1101 10:50:05.933864       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:50:05.967774       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:50:05.967801       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:50:06.026948       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:50:06.035806       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:50:06.035893       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1101 10:50:06.058070       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:50:06.085644       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:50:06.087933       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:50:06.091598       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:50:06.092020       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:50:06.092114       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:50:06.138585       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:50:06.401118       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:50:06.543122       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:50:06.780596       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:50:06.909372       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:50:06.971288       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:50:06.994558       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:50:07.198118       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.50.2"}
	I1101 10:50:07.231558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.154.196"}
	I1101 10:50:09.318495       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:50:09.588832       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:50:09.618447       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6a8858ab03de1ec723c664c7147120ea9ef2a84d11e0ceb376a78665d8f48565] <==
	I1101 10:50:09.040931       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:50:09.040978       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:50:09.041005       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:50:09.041033       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:50:09.046673       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:50:09.052341       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:50:09.056718       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:50:09.056889       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:50:09.057810       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:50:09.058279       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-014050"
	I1101 10:50:09.058330       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:50:09.061512       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:50:09.061526       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:50:09.061613       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:50:09.061660       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:50:09.061600       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:50:09.061587       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:50:09.063258       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:50:09.064425       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:50:09.071992       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:50:09.072100       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:50:09.081302       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:50:09.082524       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:50:09.084754       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:50:09.088486       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [c2c63b18b442a40d362431e7e36f733ae5f127ab2f711d4c305ce4437a974ab0] <==
	I1101 10:50:07.281874       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:50:07.387740       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:50:07.514952       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:50:07.515067       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:50:07.515207       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:50:07.560525       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:50:07.560636       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:50:07.564105       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:50:07.564596       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:50:07.564649       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:50:07.569512       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:50:07.569581       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:50:07.570738       1 config.go:200] "Starting service config controller"
	I1101 10:50:07.570792       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:50:07.569647       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:50:07.570855       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:50:07.570266       1 config.go:309] "Starting node config controller"
	I1101 10:50:07.570912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:50:07.570940       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:50:07.671487       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:50:07.671612       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:50:07.671418       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a30f2e6b80f40e1d33c1f0db013b621853606390ec749bfcaa7e3fa4a17d2938] <==
	I1101 10:50:03.185826       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:50:06.528653       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:50:06.528685       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:50:06.546765       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:50:06.546801       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:50:06.546925       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:50:06.546933       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:50:06.546957       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:50:06.546964       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:50:06.547859       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:50:06.548035       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:50:06.649085       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:50:06.649152       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:50:06.649194       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:50:09 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:09.631147     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1a59c4d2-6c8a-4e52-8dd0-0fe55b16e5a8-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fj5c6\" (UID: \"1a59c4d2-6c8a-4e52-8dd0-0fe55b16e5a8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fj5c6"
	Nov 01 10:50:09 default-k8s-diff-port-014050 kubelet[776]: W1101 10:50:09.850941     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/crio-00644e3758e0ef554c22af3fe83e024c1c37c7fc66fddecee67c9ec3d4b01d07 WatchSource:0}: Error finding container 00644e3758e0ef554c22af3fe83e024c1c37c7fc66fddecee67c9ec3d4b01d07: Status 404 returned error can't find the container with id 00644e3758e0ef554c22af3fe83e024c1c37c7fc66fddecee67c9ec3d4b01d07
	Nov 01 10:50:09 default-k8s-diff-port-014050 kubelet[776]: W1101 10:50:09.851339     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/crio-e7cfef420592902ec70f9c22c9f7fdf6ab59f2a141f6a14ae262451a6fa9cdfa WatchSource:0}: Error finding container e7cfef420592902ec70f9c22c9f7fdf6ab59f2a141f6a14ae262451a6fa9cdfa: Status 404 returned error can't find the container with id e7cfef420592902ec70f9c22c9f7fdf6ab59f2a141f6a14ae262451a6fa9cdfa
	Nov 01 10:50:14 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:14.054076     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:50:14 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:14.684875     776 scope.go:117] "RemoveContainer" containerID="02ca96f00b4f746509dbc996ce860a908280377bf1b21acd5aa7a7ca256f2ff7"
	Nov 01 10:50:15 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:15.691059     776 scope.go:117] "RemoveContainer" containerID="02ca96f00b4f746509dbc996ce860a908280377bf1b21acd5aa7a7ca256f2ff7"
	Nov 01 10:50:15 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:15.691331     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:15 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:15.691499     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:16 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:16.698619     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:16 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:16.698753     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:19 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:19.816858     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:19 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:19.817631     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:35.537111     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:35.747244     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:35.747540     776 scope.go:117] "RemoveContainer" containerID="6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:35.747718     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:35.771635     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fj5c6" podStartSLOduration=17.234146057 podStartE2EDuration="26.771618538s" podCreationTimestamp="2025-11-01 10:50:09 +0000 UTC" firstStartedPulling="2025-11-01 10:50:09.855814956 +0000 UTC m=+10.540032947" lastFinishedPulling="2025-11-01 10:50:19.393287429 +0000 UTC m=+20.077505428" observedRunningTime="2025-11-01 10:50:19.724048125 +0000 UTC m=+20.408266124" watchObservedRunningTime="2025-11-01 10:50:35.771618538 +0000 UTC m=+36.455836529"
	Nov 01 10:50:37 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:37.755297     776 scope.go:117] "RemoveContainer" containerID="87107907b9299aea123d724a736202d76b246bb22d6a94bfc659f83cee018621"
	Nov 01 10:50:39 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:39.816798     776 scope.go:117] "RemoveContainer" containerID="6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	Nov 01 10:50:39 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:39.817019     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:54 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:54.537390     776 scope.go:117] "RemoveContainer" containerID="6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	Nov 01 10:50:54 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:54.538081     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:59 default-k8s-diff-port-014050 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:50:59 default-k8s-diff-port-014050 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:50:59 default-k8s-diff-port-014050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [185dae504dec1c5863268ff5c50d7e568be7f24f21e036759e0abbb319841cf8] <==
	2025/11/01 10:50:19 Using namespace: kubernetes-dashboard
	2025/11/01 10:50:19 Using in-cluster config to connect to apiserver
	2025/11/01 10:50:19 Using secret token for csrf signing
	2025/11/01 10:50:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:50:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:50:19 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:50:19 Generating JWE encryption key
	2025/11/01 10:50:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:50:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:50:19 Initializing JWE encryption key from synchronized object
	2025/11/01 10:50:19 Creating in-cluster Sidecar client
	2025/11/01 10:50:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:50:19 Serving insecurely on HTTP port: 9090
	2025/11/01 10:50:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:50:19 Starting overwatch
	
	
	==> storage-provisioner [735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2] <==
	I1101 10:50:37.808009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:50:37.821243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:50:37.821366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:50:37.824505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:41.280379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:45.542628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:49.140808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:52.195236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:55.217136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:55.222499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:50:55.222648       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:50:55.222852       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-014050_bc9ced56-9020-4eed-b5ab-710ca4d36e7b!
	I1101 10:50:55.223739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56c59731-4a1e-4a0c-aa25-4af28f08f0eb", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-014050_bc9ced56-9020-4eed-b5ab-710ca4d36e7b became leader
	W1101 10:50:55.231465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:55.240676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:50:55.323173       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-014050_bc9ced56-9020-4eed-b5ab-710ca4d36e7b!
	W1101 10:50:57.245220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:57.252654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:59.256527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:59.262744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:01.266825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:01.280956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:03.285180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:03.295652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [87107907b9299aea123d724a736202d76b246bb22d6a94bfc659f83cee018621] <==
	I1101 10:50:07.305770       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:50:37.308517       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050: exit status 2 (511.862612ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-014050 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-014050
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-014050:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6",
	        "Created": "2025-11-01T10:48:10.158242588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:49:52.808860686Z",
	            "FinishedAt": "2025-11-01T10:49:51.925169928Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/hostname",
	        "HostsPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/hosts",
	        "LogPath": "/var/lib/docker/containers/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6-json.log",
	        "Name": "/default-k8s-diff-port-014050",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-014050:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-014050",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6",
	                "LowerDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b50c5066c8b5a70d2ca63b255654b25b4a29a8c92c50a20577ae806bfb727594/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-014050",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-014050/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-014050",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-014050",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-014050",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c5ec49f2d04d5f6cb956ea83b0a7ff625c883184f2ea07e6364951afe370475",
	            "SandboxKey": "/var/run/docker/netns/8c5ec49f2d04",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-014050": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:dd:18:0a:9f:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f438d7bf3e688fe5caa6340faa58ea25b1a6b5b20c8ce821e7570063338cd36",
	                    "EndpointID": "2a6a8d046dc09453fe0f86ad9320c5ed565cdf16edf5b9ed56a172236d163301",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-014050",
	                        "70da30e95fce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050: exit status 2 (462.167978ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-014050 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-014050 logs -n 25: (1.824099589s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-186677 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ delete  │ -p cert-options-186677                                                                                                                                                                                                                        │ cert-options-186677          │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:45 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:45 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-245622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │                     │
	│ stop    │ -p old-k8s-version-245622 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-245622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:46 UTC │
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:47 UTC │
	│ image   │ old-k8s-version-245622 image list --format=json                                                                                                                                                                                               │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │ 01 Nov 25 10:47 UTC │
	│ pause   │ -p old-k8s-version-245622 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │                     │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p cert-expiration-308600                                                                                                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-014050 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-014050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ stop    │ -p embed-certs-499088 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ image   │ default-k8s-diff-port-014050 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:50:38
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:50:38.549998  488285 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:50:38.550733  488285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:50:38.550879  488285 out.go:374] Setting ErrFile to fd 2...
	I1101 10:50:38.550907  488285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:50:38.551677  488285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:50:38.552815  488285 out.go:368] Setting JSON to false
	I1101 10:50:38.554887  488285 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9191,"bootTime":1761985048,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:50:38.554964  488285 start.go:143] virtualization:  
	I1101 10:50:38.558075  488285 out.go:179] * [embed-certs-499088] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:50:38.561895  488285 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:50:38.562005  488285 notify.go:221] Checking for updates...
	I1101 10:50:38.568211  488285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:50:38.571234  488285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:50:38.574288  488285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:50:38.577278  488285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:50:38.580302  488285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:50:38.583841  488285 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:50:38.584406  488285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:50:38.613088  488285 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:50:38.613223  488285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:50:38.675905  488285 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:50:38.666767407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:50:38.676022  488285 docker.go:319] overlay module found
	I1101 10:50:38.679139  488285 out.go:179] * Using the docker driver based on existing profile
	I1101 10:50:38.681995  488285 start.go:309] selected driver: docker
	I1101 10:50:38.682019  488285 start.go:930] validating driver "docker" against &{Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:50:38.682131  488285 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:50:38.684113  488285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:50:38.753602  488285 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:50:38.744169812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:50:38.753943  488285 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:50:38.753979  488285 cni.go:84] Creating CNI manager for ""
	I1101 10:50:38.754040  488285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:50:38.754084  488285 start.go:353] cluster config:
	{Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:50:38.760374  488285 out.go:179] * Starting "embed-certs-499088" primary control-plane node in "embed-certs-499088" cluster
	I1101 10:50:38.765007  488285 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:50:38.767974  488285 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:50:38.774568  488285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:50:38.774634  488285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:50:38.774648  488285 cache.go:59] Caching tarball of preloaded images
	I1101 10:50:38.774669  488285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:50:38.774748  488285 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:50:38.774759  488285 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:50:38.774878  488285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/config.json ...
	I1101 10:50:38.796076  488285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:50:38.796102  488285 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:50:38.796120  488285 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:50:38.796144  488285 start.go:360] acquireMachinesLock for embed-certs-499088: {Name:mk5ad922c2d628b6bdeae9b2175ff7077c575607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:50:38.796206  488285 start.go:364] duration metric: took 38.458µs to acquireMachinesLock for "embed-certs-499088"
	I1101 10:50:38.796232  488285 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:50:38.796238  488285 fix.go:54] fixHost starting: 
	I1101 10:50:38.796499  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:38.815060  488285 fix.go:112] recreateIfNeeded on embed-certs-499088: state=Stopped err=<nil>
	W1101 10:50:38.815090  488285 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:50:38.057492  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	W1101 10:50:40.551501  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	I1101 10:50:38.818020  488285 out.go:252] * Restarting existing docker container for "embed-certs-499088" ...
	I1101 10:50:38.818114  488285 cli_runner.go:164] Run: docker start embed-certs-499088
	I1101 10:50:39.164453  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:39.184448  488285 kic.go:430] container "embed-certs-499088" state is running.
	I1101 10:50:39.184853  488285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
	I1101 10:50:39.208443  488285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/config.json ...
	I1101 10:50:39.208683  488285 machine.go:94] provisionDockerMachine start ...
	I1101 10:50:39.208754  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:39.232194  488285 main.go:143] libmachine: Using SSH client type: native
	I1101 10:50:39.232774  488285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1101 10:50:39.232791  488285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:50:39.233627  488285 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:50:42.392682  488285 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-499088
	
	I1101 10:50:42.392719  488285 ubuntu.go:182] provisioning hostname "embed-certs-499088"
	I1101 10:50:42.392795  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:42.411745  488285 main.go:143] libmachine: Using SSH client type: native
	I1101 10:50:42.412089  488285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1101 10:50:42.412106  488285 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-499088 && echo "embed-certs-499088" | sudo tee /etc/hostname
	I1101 10:50:42.574692  488285 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-499088
	
	I1101 10:50:42.574785  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:42.591738  488285 main.go:143] libmachine: Using SSH client type: native
	I1101 10:50:42.592040  488285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1101 10:50:42.592062  488285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-499088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-499088/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-499088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:50:42.741375  488285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:50:42.741401  488285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:50:42.741423  488285 ubuntu.go:190] setting up certificates
	I1101 10:50:42.741439  488285 provision.go:84] configureAuth start
	I1101 10:50:42.741499  488285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
	I1101 10:50:42.758404  488285 provision.go:143] copyHostCerts
	I1101 10:50:42.758479  488285 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:50:42.758501  488285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:50:42.758579  488285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:50:42.758688  488285 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:50:42.758707  488285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:50:42.758735  488285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:50:42.758801  488285 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:50:42.758806  488285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:50:42.758830  488285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:50:42.758886  488285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.embed-certs-499088 san=[127.0.0.1 192.168.76.2 embed-certs-499088 localhost minikube]
	I1101 10:50:42.931353  488285 provision.go:177] copyRemoteCerts
	I1101 10:50:42.931428  488285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:50:42.931467  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:42.952259  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.069072  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:50:43.090198  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 10:50:43.110553  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:50:43.130331  488285 provision.go:87] duration metric: took 388.86823ms to configureAuth
	I1101 10:50:43.130361  488285 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:50:43.130558  488285 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:50:43.130664  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.148285  488285 main.go:143] libmachine: Using SSH client type: native
	I1101 10:50:43.148599  488285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1101 10:50:43.148618  488285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:50:43.481474  488285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:50:43.481499  488285 machine.go:97] duration metric: took 4.272798667s to provisionDockerMachine
	I1101 10:50:43.481510  488285 start.go:293] postStartSetup for "embed-certs-499088" (driver="docker")
	I1101 10:50:43.481521  488285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:50:43.481618  488285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:50:43.481664  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.500978  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.609619  488285 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:50:43.613371  488285 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:50:43.613403  488285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:50:43.613415  488285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:50:43.613473  488285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:50:43.613555  488285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:50:43.613661  488285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:50:43.621731  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:50:43.640961  488285 start.go:296] duration metric: took 159.435089ms for postStartSetup
	I1101 10:50:43.641095  488285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:50:43.641173  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.658498  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.758021  488285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:50:43.762870  488285 fix.go:56] duration metric: took 4.966625241s for fixHost
	I1101 10:50:43.762895  488285 start.go:83] releasing machines lock for "embed-certs-499088", held for 4.966674374s
	I1101 10:50:43.762972  488285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-499088
	I1101 10:50:43.779802  488285 ssh_runner.go:195] Run: cat /version.json
	I1101 10:50:43.779863  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.780131  488285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:50:43.780188  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:43.802260  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.814524  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:43.908603  488285 ssh_runner.go:195] Run: systemctl --version
	I1101 10:50:44.010009  488285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:50:44.057051  488285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:50:44.062205  488285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:50:44.062345  488285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:50:44.071919  488285 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:50:44.071944  488285 start.go:496] detecting cgroup driver to use...
	I1101 10:50:44.071977  488285 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:50:44.072045  488285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:50:44.094313  488285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:50:44.110089  488285 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:50:44.110169  488285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:50:44.131878  488285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:50:44.145089  488285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:50:44.264672  488285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:50:44.387081  488285 docker.go:234] disabling docker service ...
	I1101 10:50:44.387180  488285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:50:44.402847  488285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:50:44.417055  488285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:50:44.537563  488285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:50:44.682650  488285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:50:44.696376  488285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:50:44.714166  488285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:50:44.714288  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.723731  488285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:50:44.723830  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.732859  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.741864  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.750838  488285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:50:44.758966  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.767631  488285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.776715  488285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:50:44.785960  488285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:50:44.793489  488285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:50:44.801964  488285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:50:44.925705  488285 ssh_runner.go:195] Run: sudo systemctl restart crio
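The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O: it pins the pause image, forces the cgroupfs cgroup manager, resets conmon_cgroup, and re-adds the unprivileged-port sysctl. A rough Go equivalent of the pattern-replace step (illustrative sketch only; the file path and values are copied from the log, and only two of the edits are shown):

// criocfg.go — illustrative sketch: set pause_image and cgroup_manager in
// 02-crio.conf via regexp, mirroring the sed edits above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setOption(conf []byte, key, value string) []byte {
	// Replace an existing "key = ..." line, as the sed commands above do.
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setOption(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}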
	I1101 10:50:45.088150  488285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:50:45.088328  488285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:50:45.095306  488285 start.go:564] Will wait 60s for crictl version
	I1101 10:50:45.095488  488285 ssh_runner.go:195] Run: which crictl
	I1101 10:50:45.107278  488285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:50:45.141128  488285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:50:45.141234  488285 ssh_runner.go:195] Run: crio --version
	I1101 10:50:45.199032  488285 ssh_runner.go:195] Run: crio --version
	I1101 10:50:45.262731  488285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1101 10:50:43.052250  485320 pod_ready.go:104] pod "coredns-66bc5c9577-cs5l2" is not "Ready", error: <nil>
	I1101 10:50:44.550344  485320 pod_ready.go:94] pod "coredns-66bc5c9577-cs5l2" is "Ready"
	I1101 10:50:44.550374  485320 pod_ready.go:86] duration metric: took 37.005472163s for pod "coredns-66bc5c9577-cs5l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.552884  485320 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.557868  485320 pod_ready.go:94] pod "etcd-default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:44.557894  485320 pod_ready.go:86] duration metric: took 4.98563ms for pod "etcd-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.560518  485320 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.565049  485320 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:44.565077  485320 pod_ready.go:86] duration metric: took 4.532957ms for pod "kube-apiserver-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.567233  485320 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.748904  485320 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:44.749029  485320 pod_ready.go:86] duration metric: took 181.730345ms for pod "kube-controller-manager-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:44.948273  485320 pod_ready.go:83] waiting for pod "kube-proxy-jhf2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:45.350400  485320 pod_ready.go:94] pod "kube-proxy-jhf2k" is "Ready"
	I1101 10:50:45.350437  485320 pod_ready.go:86] duration metric: took 402.134626ms for pod "kube-proxy-jhf2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:45.549753  485320 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:45.949422  485320 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-014050" is "Ready"
	I1101 10:50:45.949446  485320 pod_ready.go:86] duration metric: took 399.663333ms for pod "kube-scheduler-default-k8s-diff-port-014050" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:50:45.949458  485320 pod_ready.go:40] duration metric: took 38.409530276s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:50:46.060128  485320 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:50:46.065091  485320 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-014050" cluster and "default" namespace by default
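The pod_ready.go lines above poll the labelled kube-system pods (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until each reports the Ready condition. A hedged client-go sketch of that kind of wait loop (not minikube's implementation; the label selector and the default kubeconfig path are assumptions for illustration):

// podwait.go — illustrative sketch using client-go: wait until a labelled
// kube-system pod reports the Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Println("coredns is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}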
	I1101 10:50:45.271958  488285 cli_runner.go:164] Run: docker network inspect embed-certs-499088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:50:45.295251  488285 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:50:45.300944  488285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:50:45.314777  488285 kubeadm.go:884] updating cluster {Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:50:45.314919  488285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:50:45.315002  488285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:50:45.383915  488285 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:50:45.383946  488285 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:50:45.384012  488285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:50:45.424284  488285 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:50:45.424312  488285 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:50:45.424320  488285 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:50:45.424482  488285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-499088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:50:45.424601  488285 ssh_runner.go:195] Run: crio config
	I1101 10:50:45.504287  488285 cni.go:84] Creating CNI manager for ""
	I1101 10:50:45.504310  488285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:50:45.504328  488285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:50:45.504377  488285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-499088 NodeName:embed-certs-499088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:50:45.504549  488285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-499088"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:50:45.504664  488285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:50:45.513193  488285 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:50:45.513292  488285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:50:45.521380  488285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 10:50:45.535953  488285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:50:45.553062  488285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
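The kubeadm.yaml shown earlier is generated from the cluster parameters (node IP, API server port, CRI socket, node name) and copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal text/template sketch of how such a fragment could be rendered (illustrative only, not minikube's generator; the template covers just the InitConfiguration header, and all values are taken from the log above):

// kubeadmcfg.go — illustrative sketch: render a small piece of the kubeadm
// InitConfiguration shown above from Go parameters.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type params struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the log above; adjust for another cluster.
	p := params{
		NodeIP:        "192.168.76.2",
		APIServerPort: 8443,
		CRISocket:     "unix:///var/run/crio/crio.sock",
		NodeName:      "embed-certs-499088",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		os.Exit(1)
	}
}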
	I1101 10:50:45.567879  488285 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:50:45.571548  488285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:50:45.581948  488285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:50:45.702405  488285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:50:45.720994  488285 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088 for IP: 192.168.76.2
	I1101 10:50:45.721074  488285 certs.go:195] generating shared ca certs ...
	I1101 10:50:45.721116  488285 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:45.721293  488285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:50:45.721388  488285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:50:45.721431  488285 certs.go:257] generating profile certs ...
	I1101 10:50:45.721548  488285 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/client.key
	I1101 10:50:45.721645  488285 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key.ee4ebe0a
	I1101 10:50:45.721709  488285 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.key
	I1101 10:50:45.721850  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:50:45.721909  488285 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:50:45.721942  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:50:45.721998  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:50:45.722048  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:50:45.722092  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:50:45.722159  488285 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:50:45.722863  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:50:45.756525  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:50:45.784717  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:50:45.808069  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:50:45.827699  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:50:45.849696  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:50:45.882215  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:50:45.909989  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/embed-certs-499088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:50:45.932538  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:50:45.966980  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:50:45.991720  488285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:50:46.012132  488285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:50:46.026685  488285 ssh_runner.go:195] Run: openssl version
	I1101 10:50:46.034206  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:50:46.049726  488285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:50:46.055855  488285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:50:46.055996  488285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:50:46.134344  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:50:46.144521  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:50:46.154134  488285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:50:46.159177  488285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:50:46.159245  488285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:50:46.206851  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:50:46.217337  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:50:46.231816  488285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:50:46.236855  488285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:50:46.237342  488285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:50:46.288488  488285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:50:46.298986  488285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:50:46.304312  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:50:46.347905  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:50:46.437169  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:50:46.552391  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:50:46.622128  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:50:46.716962  488285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
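Each "openssl x509 -noout -in <cert> -checkend 86400" invocation above asks whether the certificate expires within the next 24 hours. The same check expressed in Go (a sketch under the assumption that the file holds a single PEM block; the path is one of those from the log):

// certcheck.go — illustrative sketch of the "-checkend 86400" checks above:
// report whether a PEM certificate expires within 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}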
	I1101 10:50:46.816487  488285 kubeadm.go:401] StartCluster: {Name:embed-certs-499088 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-499088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:50:46.816579  488285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:50:46.816655  488285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:50:46.858949  488285 cri.go:89] found id: "0de30b77d1ca10da59b96521a28d795e3e2f58d2bf5933e2fc6be1269644272f"
	I1101 10:50:46.858969  488285 cri.go:89] found id: "a312b63badfe91286205ab3f2506b1f28b4e42298c8d0022b0e1c17bcddc1e12"
	I1101 10:50:46.858974  488285 cri.go:89] found id: "0ef612cf67931e99b0ff0b2cd78a42bcb290e5834448357a04f331cca1ab13cc"
	I1101 10:50:46.858979  488285 cri.go:89] found id: "59e8eb3202b226a9242a2418d10ad312d3fe21ba3c8163fbf7bfede124b48607"
	I1101 10:50:46.858982  488285 cri.go:89] found id: ""
	I1101 10:50:46.859036  488285 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:50:46.877243  488285 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:50:46Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:50:46.877337  488285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:50:46.889039  488285 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:50:46.889056  488285 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:50:46.889111  488285 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:50:46.901661  488285 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:50:46.902197  488285 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-499088" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:50:46.902429  488285 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-499088" cluster setting kubeconfig missing "embed-certs-499088" context setting]
	I1101 10:50:46.902893  488285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:46.904511  488285 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:50:46.915462  488285 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:50:46.915538  488285 kubeadm.go:602] duration metric: took 26.475948ms to restartPrimaryControlPlane
	I1101 10:50:46.915561  488285 kubeadm.go:403] duration metric: took 99.084823ms to StartCluster
	I1101 10:50:46.915610  488285 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:46.915697  488285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:50:46.917050  488285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:50:46.917612  488285 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:50:46.917754  488285 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:50:46.917839  488285 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-499088"
	I1101 10:50:46.917858  488285 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-499088"
	W1101 10:50:46.917864  488285 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:50:46.917887  488285 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:50:46.918428  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:46.918592  488285 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:50:46.919056  488285 addons.go:70] Setting default-storageclass=true in profile "embed-certs-499088"
	I1101 10:50:46.919078  488285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-499088"
	I1101 10:50:46.919310  488285 addons.go:70] Setting dashboard=true in profile "embed-certs-499088"
	I1101 10:50:46.919345  488285 addons.go:239] Setting addon dashboard=true in "embed-certs-499088"
	I1101 10:50:46.919349  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	W1101 10:50:46.919352  488285 addons.go:248] addon dashboard should already be in state true
	I1101 10:50:46.919377  488285 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:50:46.919953  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:46.924531  488285 out.go:179] * Verifying Kubernetes components...
	I1101 10:50:46.927521  488285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:50:47.002395  488285 addons.go:239] Setting addon default-storageclass=true in "embed-certs-499088"
	W1101 10:50:47.002427  488285 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:50:47.002455  488285 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:50:47.002920  488285 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:50:47.003051  488285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:50:47.007191  488285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:50:47.007215  488285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:50:47.007294  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:47.017236  488285 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:50:47.025814  488285 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:50:47.033096  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:50:47.033125  488285 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:50:47.033203  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:47.062589  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:47.068635  488285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:50:47.068663  488285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:50:47.068726  488285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:50:47.082069  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:47.107309  488285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:50:47.299759  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:50:47.299839  488285 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:50:47.346567  488285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:50:47.357815  488285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:50:47.368367  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:50:47.368392  488285 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:50:47.431418  488285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:50:47.438479  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:50:47.438552  488285 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:50:47.509622  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:50:47.509646  488285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:50:47.602280  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:50:47.602305  488285 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:50:47.672209  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:50:47.672234  488285 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:50:47.695472  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:50:47.695499  488285 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:50:47.716882  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:50:47.716907  488285 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:50:47.737448  488285 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:50:47.737481  488285 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:50:47.760742  488285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:50:51.363797  488285 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.00589875s)
	I1101 10:50:51.363852  488285 node_ready.go:35] waiting up to 6m0s for node "embed-certs-499088" to be "Ready" ...
	I1101 10:50:51.363929  488285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.01725394s)
	I1101 10:50:51.392228  488285 node_ready.go:49] node "embed-certs-499088" is "Ready"
	I1101 10:50:51.392258  488285 node_ready.go:38] duration metric: took 28.366517ms for node "embed-certs-499088" to be "Ready" ...
	I1101 10:50:51.392272  488285 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:50:51.392373  488285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:50:52.774878  488285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.343374731s)
	I1101 10:50:52.774993  488285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.014220126s)
	I1101 10:50:52.775137  488285 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.382748528s)
	I1101 10:50:52.775157  488285 api_server.go:72] duration metric: took 5.85651804s to wait for apiserver process to appear ...
	I1101 10:50:52.775163  488285 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:50:52.775179  488285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:50:52.778080  488285 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-499088 addons enable metrics-server
	
	I1101 10:50:52.781434  488285 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1101 10:50:52.784524  488285 addons.go:515] duration metric: took 5.866740158s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1101 10:50:52.785190  488285 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:50:52.785212  488285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:50:53.275859  488285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:50:53.288761  488285 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:50:53.290516  488285 api_server.go:141] control plane version: v1.34.1
	I1101 10:50:53.290541  488285 api_server.go:131] duration metric: took 515.371866ms to wait for apiserver health ...
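The healthz probes above show the usual restart pattern: the first request returns 500 while the rbac/bootstrap-roles post-start hook is still running, and a retry half a second later returns 200. A small Go sketch of polling /healthz until it reports healthy (illustrative only, not minikube's api_server.go; the endpoint URL comes from the log, and TLS verification is skipped here rather than trusting the cluster CA as minikube does):

// healthwait.go — illustrative sketch: poll the apiserver /healthz endpoint
// until it reports 200 OK, as the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	// TLS verification is skipped purely to keep the sketch short.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}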
	I1101 10:50:53.290549  488285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:50:53.293715  488285 system_pods.go:59] 8 kube-system pods found
	I1101 10:50:53.293839  488285 system_pods.go:61] "coredns-66bc5c9577-pdh6r" [5b76d194-6689-4f01-aa5d-c2d0b63808ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:53.293877  488285 system_pods.go:61] "etcd-embed-certs-499088" [2096f7af-f76e-4736-b77a-60c61146d542] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:50:53.293916  488285 system_pods.go:61] "kindnet-9sr9j" [a24caca1-3f4b-4d34-b663-c58a152bfa02] Running
	I1101 10:50:53.293944  488285 system_pods.go:61] "kube-apiserver-embed-certs-499088" [599d58e4-1782-4266-bc1e-0eda23f68ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:50:53.293969  488285 system_pods.go:61] "kube-controller-manager-embed-certs-499088" [1cacce4c-3b57-4821-9a97-123186b7a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:50:53.294004  488285 system_pods.go:61] "kube-proxy-dqf86" [92677bfa-cc3f-4940-89f9-23d383e5dba9] Running
	I1101 10:50:53.294035  488285 system_pods.go:61] "kube-scheduler-embed-certs-499088" [3c20f3ae-0d1a-440e-86ae-4f691c6988cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:50:53.294066  488285 system_pods.go:61] "storage-provisioner" [5678aab9-c0e9-46c3-929c-04fd8bcc56db] Running
	I1101 10:50:53.294106  488285 system_pods.go:74] duration metric: took 3.549646ms to wait for pod list to return data ...
	I1101 10:50:53.294127  488285 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:50:53.296630  488285 default_sa.go:45] found service account: "default"
	I1101 10:50:53.296697  488285 default_sa.go:55] duration metric: took 2.547839ms for default service account to be created ...
	I1101 10:50:53.296722  488285 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:50:53.299825  488285 system_pods.go:86] 8 kube-system pods found
	I1101 10:50:53.299932  488285 system_pods.go:89] "coredns-66bc5c9577-pdh6r" [5b76d194-6689-4f01-aa5d-c2d0b63808ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:53.299974  488285 system_pods.go:89] "etcd-embed-certs-499088" [2096f7af-f76e-4736-b77a-60c61146d542] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:50:53.300000  488285 system_pods.go:89] "kindnet-9sr9j" [a24caca1-3f4b-4d34-b663-c58a152bfa02] Running
	I1101 10:50:53.300025  488285 system_pods.go:89] "kube-apiserver-embed-certs-499088" [599d58e4-1782-4266-bc1e-0eda23f68ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:50:53.300071  488285 system_pods.go:89] "kube-controller-manager-embed-certs-499088" [1cacce4c-3b57-4821-9a97-123186b7a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:50:53.300098  488285 system_pods.go:89] "kube-proxy-dqf86" [92677bfa-cc3f-4940-89f9-23d383e5dba9] Running
	I1101 10:50:53.300123  488285 system_pods.go:89] "kube-scheduler-embed-certs-499088" [3c20f3ae-0d1a-440e-86ae-4f691c6988cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:50:53.300162  488285 system_pods.go:89] "storage-provisioner" [5678aab9-c0e9-46c3-929c-04fd8bcc56db] Running
	I1101 10:50:53.300189  488285 system_pods.go:126] duration metric: took 3.448442ms to wait for k8s-apps to be running ...
	I1101 10:50:53.300211  488285 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:50:53.300298  488285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:50:53.315857  488285 system_svc.go:56] duration metric: took 15.636297ms WaitForService to wait for kubelet
	I1101 10:50:53.315932  488285 kubeadm.go:587] duration metric: took 6.397291546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:50:53.315986  488285 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:50:53.320337  488285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:50:53.320369  488285 node_conditions.go:123] node cpu capacity is 2
	I1101 10:50:53.320382  488285 node_conditions.go:105] duration metric: took 4.37792ms to run NodePressure ...
	I1101 10:50:53.320395  488285 start.go:242] waiting for startup goroutines ...
	I1101 10:50:53.320403  488285 start.go:247] waiting for cluster config update ...
	I1101 10:50:53.320414  488285 start.go:256] writing updated cluster config ...
	I1101 10:50:53.320677  488285 ssh_runner.go:195] Run: rm -f paused
	I1101 10:50:53.326556  488285 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:50:53.330739  488285 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pdh6r" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:50:55.337272  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:50:57.837685  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:00.373556  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:02.838014  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
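
	The lines above show the harness polling https://192.168.76.2:8443/healthz until the apiserver answers 200, then waiting for each control-plane pod to report Ready. A minimal Go sketch of that healthz polling pattern; the endpoint, interval, timeout, and the skipped certificate verification are illustrative assumptions, not minikube's actual wait code:

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls a /healthz URL until it answers 200 "ok" or the context
	// expires, mirroring the "Checking apiserver healthz at ..." lines above.
	func waitForHealthz(ctx context.Context, url string, interval time.Duration) error {
		client := &http.Client{
			// The test cluster's apiserver serves a self-signed cert; skipping
			// verification here is a sketch-only shortcut.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
				fmt.Printf("%s returned %d: healthz check failed\n", url, resp.StatusCode)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForHealthz(ctx, "https://192.168.76.2:8443/healthz", 500*time.Millisecond); err != nil {
			fmt.Println("apiserver never became healthy:", err)
		}
	}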
	
	
	==> CRI-O <==
	Nov 01 10:50:35 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:35.762748835Z" level=info msg="Removed container e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp/dashboard-metrics-scraper" id=beb65dbf-b41d-4189-bc55-416cac625064 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:50:37 default-k8s-diff-port-014050 conmon[1137]: conmon 87107907b9299aea123d <ninfo>: container 1143 exited with status 1
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.755853981Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=180d312a-5fc7-4086-92c2-5dd2d99154a5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.757157476Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=638bc0b8-f387-4617-ae2d-ca18a6c9ef68 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.758657314Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7aec720d-420e-4b34-80bb-27cb8308d20a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.758915031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.763973351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.765324954Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/671e789f82649c46bdee92a1fbda134c893b826dc1bf2b4f1fda8ffefb97d414/merged/etc/passwd: no such file or directory"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.765499495Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/671e789f82649c46bdee92a1fbda134c893b826dc1bf2b4f1fda8ffefb97d414/merged/etc/group: no such file or directory"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.765924434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.78714951Z" level=info msg="Created container 735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2: kube-system/storage-provisioner/storage-provisioner" id=7aec720d-420e-4b34-80bb-27cb8308d20a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.788402117Z" level=info msg="Starting container: 735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2" id=f239283f-c40e-4648-90cc-9bd24e648269 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:50:37 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:37.791326156Z" level=info msg="Started container" PID=1646 containerID=735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2 description=kube-system/storage-provisioner/storage-provisioner id=f239283f-c40e-4648-90cc-9bd24e648269 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99f38d8bbc069fcb69ba6aa08129df3589af17598db5fb30e1b64ba96477a9a3
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.345935295Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.353096568Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.353281391Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.353355615Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.360322425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.360528884Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.360601689Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.366479742Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.366664252Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.366785065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.378889827Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:50:47 default-k8s-diff-port-014050 crio[653]: time="2025-11-01T10:50:47.379098846Z" level=info msg="Updated default CNI network name to kindnet"
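
	The CRI-O entries above record the runtime noticing kindnet's conflist being created, written, and renamed into place under /etc/cni/net.d, then re-reading it and updating the default network name. A small Go sketch of reading such a conflist; the path and struct fields are assumptions inferred from the log lines, not CRI-O's implementation:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// conflist models only the fields the log above mentions: the network name
	// and the type of the first plugin entry.
	type conflist struct {
		Name    string `json:"name"`
		Plugins []struct {
			Type string `json:"type"`
		} `json:"plugins"`
	}

	func main() {
		data, err := os.ReadFile("/etc/cni/net.d/10-kindnet.conflist")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read conflist:", err)
			os.Exit(1)
		}
		var c conflist
		if err := json.Unmarshal(data, &c); err != nil {
			fmt.Fprintln(os.Stderr, "parse conflist:", err)
			os.Exit(1)
		}
		if len(c.Plugins) > 0 {
			fmt.Printf("Found CNI network %s (type=%s)\n", c.Name, c.Plugins[0].Type)
		}
	}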
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	735ad2f7c5490       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   99f38d8bbc069       storage-provisioner                                    kube-system
	6405aad239fca       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   e7cfef4205929       dashboard-metrics-scraper-6ffb444bf9-nmbsp             kubernetes-dashboard
	185dae504dec1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   00644e3758e0e       kubernetes-dashboard-855c9754f9-fj5c6                  kubernetes-dashboard
	8d8b622a022f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   4a3bfd63d948a       coredns-66bc5c9577-cs5l2                               kube-system
	3ecb17b9de9a7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   c88418d198be3       busybox                                                default
	c2c63b18b442a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   9e9b7983fc037       kube-proxy-jhf2k                                       kube-system
	ef258ac904917       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   b26e570bfff36       kindnet-j2vhl                                          kube-system
	87107907b9299       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   99f38d8bbc069       storage-provisioner                                    kube-system
	3f156e559c73a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b7c0fe977807c       kube-apiserver-default-k8s-diff-port-014050            kube-system
	caca3cf4c81ff       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4bbb0c2f7b980       etcd-default-k8s-diff-port-014050                      kube-system
	6a8858ab03de1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ec5c8c1a8ff82       kube-controller-manager-default-k8s-diff-port-014050   kube-system
	a30f2e6b80f40       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   69561c81a8e59       kube-scheduler-default-k8s-diff-port-014050            kube-system
	
	
	==> coredns [8d8b622a022f7eeec2e8a7f9dc8fcd0660f5f440dd391b4c90267eacedb4922f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39989 - 57084 "HINFO IN 5875581610471970698.8636234843709778040. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011759626s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
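
	Every coredns list call above fails the same way: the dial to the kubernetes service IP 10.96.0.1:443 times out, which is why the ready plugin keeps reporting "Still waiting". A one-off connectivity probe in Go that reproduces that dial; the address comes from the log, while the 5-second timeout is an assumption:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same target the coredns reflectors are failing to reach.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to 10.96.0.1:443")
	}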
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-014050
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-014050
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=default-k8s-diff-port-014050
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_48_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:48:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-014050
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:50:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:50:36 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:50:36 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:50:36 +0000   Sat, 01 Nov 2025 10:48:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:50:36 +0000   Sat, 01 Nov 2025 10:49:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-014050
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                afada185-3889-484f-a7d8-6b092f3a288a
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-cs5l2                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-014050                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-j2vhl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-014050             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-014050    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-jhf2k                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-014050             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nmbsp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fj5c6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m24s                  node-controller  Node default-k8s-diff-port-014050 event: Registered Node default-k8s-diff-port-014050 in Controller
	  Normal   NodeReady                102s                   kubelet          Node default-k8s-diff-port-014050 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-014050 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node default-k8s-diff-port-014050 event: Registered Node default-k8s-diff-port-014050 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [caca3cf4c81ffb29f4d2c8e47aa22c4b3756d0636b9899246218e95da10ca2c5] <==
	{"level":"warn","ts":"2025-11-01T10:50:03.921810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:03.941150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:03.966339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:03.985416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.001465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.018021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.041705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.065873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.088359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.109868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.130094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.145649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.162032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.179526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.201835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.221237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.243296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.266321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.286391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.300558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.322227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.367471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.395889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.441648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:04.584424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49314","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:51:06 up  2:33,  0 user,  load average: 3.39, 3.43, 2.84
	Linux default-k8s-diff-port-014050 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ef258ac904917d8b16125eb5674949803504b091f5afd202b51ee52257d68a8c] <==
	I1101 10:50:07.226113       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:50:07.226338       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:50:07.226456       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:50:07.227947       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:50:07.228088       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:50:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:50:07.343969       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:50:07.343999       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:50:07.344201       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:50:07.430794       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:50:37.344035       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:50:37.428505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:50:37.428505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:50:37.431115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 10:50:38.744721       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:50:38.744844       1 metrics.go:72] Registering metrics
	I1101 10:50:38.744978       1 controller.go:711] "Syncing nftables rules"
	I1101 10:50:47.345289       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:50:47.345323       1 main.go:301] handling current node
	I1101 10:50:57.349201       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:50:57.349299       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3f156e559c73a53c1e70f973aee6243c1d143da20ede0269a961550635cfc68a] <==
	I1101 10:50:05.933856       1 shared_informer.go:349] "Waiting for caches to sync" controller="ipallocator-repair-controller"
	I1101 10:50:05.933864       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:50:05.967774       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:50:05.967801       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:50:06.026948       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:50:06.035806       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:50:06.035893       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1101 10:50:06.058070       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:50:06.085644       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:50:06.087933       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:50:06.091598       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:50:06.092020       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:50:06.092114       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:50:06.138585       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:50:06.401118       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:50:06.543122       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:50:06.780596       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:50:06.909372       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:50:06.971288       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:50:06.994558       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:50:07.198118       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.50.2"}
	I1101 10:50:07.231558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.154.196"}
	I1101 10:50:09.318495       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:50:09.588832       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:50:09.618447       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6a8858ab03de1ec723c664c7147120ea9ef2a84d11e0ceb376a78665d8f48565] <==
	I1101 10:50:09.040931       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:50:09.040978       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:50:09.041005       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:50:09.041033       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:50:09.046673       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:50:09.052341       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:50:09.056718       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:50:09.056889       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:50:09.057810       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:50:09.058279       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-014050"
	I1101 10:50:09.058330       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:50:09.061512       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:50:09.061526       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:50:09.061613       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:50:09.061660       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:50:09.061600       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:50:09.061587       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:50:09.063258       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:50:09.064425       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:50:09.071992       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:50:09.072100       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:50:09.081302       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:50:09.082524       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:50:09.084754       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:50:09.088486       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [c2c63b18b442a40d362431e7e36f733ae5f127ab2f711d4c305ce4437a974ab0] <==
	I1101 10:50:07.281874       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:50:07.387740       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:50:07.514952       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:50:07.515067       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:50:07.515207       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:50:07.560525       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:50:07.560636       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:50:07.564105       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:50:07.564596       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:50:07.564649       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:50:07.569512       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:50:07.569581       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:50:07.570738       1 config.go:200] "Starting service config controller"
	I1101 10:50:07.570792       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:50:07.569647       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:50:07.570855       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:50:07.570266       1 config.go:309] "Starting node config controller"
	I1101 10:50:07.570912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:50:07.570940       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:50:07.671487       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:50:07.671612       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:50:07.671418       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a30f2e6b80f40e1d33c1f0db013b621853606390ec749bfcaa7e3fa4a17d2938] <==
	I1101 10:50:03.185826       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:50:06.528653       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:50:06.528685       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:50:06.546765       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:50:06.546801       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:50:06.546925       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:50:06.546933       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:50:06.546957       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:50:06.546964       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:50:06.547859       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:50:06.548035       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:50:06.649085       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:50:06.649152       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:50:06.649194       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:50:09 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:09.631147     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1a59c4d2-6c8a-4e52-8dd0-0fe55b16e5a8-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fj5c6\" (UID: \"1a59c4d2-6c8a-4e52-8dd0-0fe55b16e5a8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fj5c6"
	Nov 01 10:50:09 default-k8s-diff-port-014050 kubelet[776]: W1101 10:50:09.850941     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/crio-00644e3758e0ef554c22af3fe83e024c1c37c7fc66fddecee67c9ec3d4b01d07 WatchSource:0}: Error finding container 00644e3758e0ef554c22af3fe83e024c1c37c7fc66fddecee67c9ec3d4b01d07: Status 404 returned error can't find the container with id 00644e3758e0ef554c22af3fe83e024c1c37c7fc66fddecee67c9ec3d4b01d07
	Nov 01 10:50:09 default-k8s-diff-port-014050 kubelet[776]: W1101 10:50:09.851339     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70da30e95fcee12fe06a15aaa295ac36375b1468702af60500027e29b14fc0f6/crio-e7cfef420592902ec70f9c22c9f7fdf6ab59f2a141f6a14ae262451a6fa9cdfa WatchSource:0}: Error finding container e7cfef420592902ec70f9c22c9f7fdf6ab59f2a141f6a14ae262451a6fa9cdfa: Status 404 returned error can't find the container with id e7cfef420592902ec70f9c22c9f7fdf6ab59f2a141f6a14ae262451a6fa9cdfa
	Nov 01 10:50:14 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:14.054076     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:50:14 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:14.684875     776 scope.go:117] "RemoveContainer" containerID="02ca96f00b4f746509dbc996ce860a908280377bf1b21acd5aa7a7ca256f2ff7"
	Nov 01 10:50:15 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:15.691059     776 scope.go:117] "RemoveContainer" containerID="02ca96f00b4f746509dbc996ce860a908280377bf1b21acd5aa7a7ca256f2ff7"
	Nov 01 10:50:15 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:15.691331     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:15 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:15.691499     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:16 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:16.698619     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:16 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:16.698753     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:19 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:19.816858     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:19 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:19.817631     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:35.537111     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:35.747244     776 scope.go:117] "RemoveContainer" containerID="e4957d153df2ff082309fe3bd0bb31f7261e42dc2510e0da7a054486164d3de0"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:35.747540     776 scope.go:117] "RemoveContainer" containerID="6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:35.747718     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:35 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:35.771635     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fj5c6" podStartSLOduration=17.234146057 podStartE2EDuration="26.771618538s" podCreationTimestamp="2025-11-01 10:50:09 +0000 UTC" firstStartedPulling="2025-11-01 10:50:09.855814956 +0000 UTC m=+10.540032947" lastFinishedPulling="2025-11-01 10:50:19.393287429 +0000 UTC m=+20.077505428" observedRunningTime="2025-11-01 10:50:19.724048125 +0000 UTC m=+20.408266124" watchObservedRunningTime="2025-11-01 10:50:35.771618538 +0000 UTC m=+36.455836529"
	Nov 01 10:50:37 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:37.755297     776 scope.go:117] "RemoveContainer" containerID="87107907b9299aea123d724a736202d76b246bb22d6a94bfc659f83cee018621"
	Nov 01 10:50:39 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:39.816798     776 scope.go:117] "RemoveContainer" containerID="6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	Nov 01 10:50:39 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:39.817019     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:54 default-k8s-diff-port-014050 kubelet[776]: I1101 10:50:54.537390     776 scope.go:117] "RemoveContainer" containerID="6405aad239fcab4f5a5fd8c6b59e5e65a0f59969a2961cc877b46076e9366cf8"
	Nov 01 10:50:54 default-k8s-diff-port-014050 kubelet[776]: E1101 10:50:54.538081     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nmbsp_kubernetes-dashboard(c170f8b8-d740-4061-86fe-bc1961d37492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nmbsp" podUID="c170f8b8-d740-4061-86fe-bc1961d37492"
	Nov 01 10:50:59 default-k8s-diff-port-014050 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:50:59 default-k8s-diff-port-014050 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:50:59 default-k8s-diff-port-014050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
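
	The repeated CrashLoopBackOff errors above show the kubelet doubling the restart delay for dashboard-metrics-scraper from 10s to 20s between attempts. A tiny Go sketch of that doubling back-off; the 10s base and the 5-minute cap are the commonly cited kubelet defaults and are assumed here rather than read from this cluster:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		backoff := 10 * time.Second        // assumed initial container restart delay
		const maxBackoff = 5 * time.Minute // assumed cap on the delay
		for attempt := 1; attempt <= 7; attempt++ {
			fmt.Printf("restart attempt %d: back-off %s\n", attempt, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}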
	
	
	==> kubernetes-dashboard [185dae504dec1c5863268ff5c50d7e568be7f24f21e036759e0abbb319841cf8] <==
	2025/11/01 10:50:19 Using namespace: kubernetes-dashboard
	2025/11/01 10:50:19 Using in-cluster config to connect to apiserver
	2025/11/01 10:50:19 Using secret token for csrf signing
	2025/11/01 10:50:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:50:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:50:19 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:50:19 Generating JWE encryption key
	2025/11/01 10:50:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:50:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:50:19 Initializing JWE encryption key from synchronized object
	2025/11/01 10:50:19 Creating in-cluster Sidecar client
	2025/11/01 10:50:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:50:19 Serving insecurely on HTTP port: 9090
	2025/11/01 10:50:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:50:19 Starting overwatch
	
	
	==> storage-provisioner [735ad2f7c5490519a6ab1017484e2cdd0846aa4f309b74bb81418edd99ddcbd2] <==
	I1101 10:50:37.821243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:50:37.821366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:50:37.824505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:41.280379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:45.542628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:49.140808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:52.195236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:55.217136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:55.222499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:50:55.222648       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:50:55.222852       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-014050_bc9ced56-9020-4eed-b5ab-710ca4d36e7b!
	I1101 10:50:55.223739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56c59731-4a1e-4a0c-aa25-4af28f08f0eb", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-014050_bc9ced56-9020-4eed-b5ab-710ca4d36e7b became leader
	W1101 10:50:55.231465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:55.240676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:50:55.323173       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-014050_bc9ced56-9020-4eed-b5ab-710ca4d36e7b!
	W1101 10:50:57.245220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:57.252654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:59.256527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:50:59.262744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:01.266825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:01.280956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:03.285180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:03.295652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:05.299325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:05.307300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [87107907b9299aea123d724a736202d76b246bb22d6a94bfc659f83cee018621] <==
	I1101 10:50:07.305770       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:50:37.308517       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050: exit status 2 (465.583788ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-014050 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.52s)
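The storage-provisioner trace above acquires its leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object, which is why every renewal emits the "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning. For contrast, a minimal client-go sketch of the same election done against a coordination.k8s.io Lease (which avoids that warning) could look like the following; the lock name, namespace and timings are reused or invented purely for illustration and are not minikube's actual configuration.

// Sketch only: Lease-based leader election with client-go, shown to contrast
// with the Endpoints-based lock used in the provisioner logs above.
// Names, namespace and timings are illustrative.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // running inside the cluster, like the provisioner
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // unique identity per candidate
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // hypothetical reuse of the lock name seen above
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}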

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (8.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-499088 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-499088 --alsologtostderr -v=1: exit status 80 (2.366660161s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-499088 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:38.768768  494457 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:38.768965  494457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:38.768972  494457 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:38.768977  494457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:38.769240  494457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:51:38.769531  494457 out.go:368] Setting JSON to false
	I1101 10:51:38.769551  494457 mustload.go:66] Loading cluster: embed-certs-499088
	I1101 10:51:38.769937  494457 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:38.770480  494457 cli_runner.go:164] Run: docker container inspect embed-certs-499088 --format={{.State.Status}}
	I1101 10:51:38.798443  494457 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:51:38.798758  494457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:38.890833  494457 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:78 SystemTime:2025-11-01 10:51:38.881329125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:38.891491  494457 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-499088 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:51:38.895145  494457 out.go:179] * Pausing node embed-certs-499088 ... 
	I1101 10:51:38.899113  494457 host.go:66] Checking if "embed-certs-499088" exists ...
	I1101 10:51:38.899458  494457 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:38.899511  494457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-499088
	I1101 10:51:38.926592  494457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/embed-certs-499088/id_rsa Username:docker}
	I1101 10:51:39.037713  494457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:51:39.054595  494457 pause.go:52] kubelet running: true
	I1101 10:51:39.054674  494457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:51:39.393012  494457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:51:39.393191  494457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:51:39.476836  494457 cri.go:89] found id: "7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01"
	I1101 10:51:39.476914  494457 cri.go:89] found id: "6973f868af67739d0ca69e54523b07f8023a75440e79117e45dc08ac4cd4eadb"
	I1101 10:51:39.476964  494457 cri.go:89] found id: "e1d26269c43dedd8c98302d9e3982d65d35c7c8b81d14098592ae01842e55e1d"
	I1101 10:51:39.476996  494457 cri.go:89] found id: "354489decdc5be3f11a1c587685b3a87320c7a34e86b10f5cc6b354777034093"
	I1101 10:51:39.477016  494457 cri.go:89] found id: "66a7d9d0871f59caa5a654326fa6af58cf9a0cb60f71adebd11d70504d202a8f"
	I1101 10:51:39.477039  494457 cri.go:89] found id: "0de30b77d1ca10da59b96521a28d795e3e2f58d2bf5933e2fc6be1269644272f"
	I1101 10:51:39.477077  494457 cri.go:89] found id: "a312b63badfe91286205ab3f2506b1f28b4e42298c8d0022b0e1c17bcddc1e12"
	I1101 10:51:39.477096  494457 cri.go:89] found id: "0ef612cf67931e99b0ff0b2cd78a42bcb290e5834448357a04f331cca1ab13cc"
	I1101 10:51:39.477116  494457 cri.go:89] found id: "59e8eb3202b226a9242a2418d10ad312d3fe21ba3c8163fbf7bfede124b48607"
	I1101 10:51:39.477185  494457 cri.go:89] found id: "5bedbcf6adb0068c6e314a6cf2ff873b7938d2ba6ec9da57a634909cee70e1fc"
	I1101 10:51:39.477211  494457 cri.go:89] found id: "1052577ace73dab7a4b657cf5e7a7050b89edcf1f440e4859057a775cd3e4d49"
	I1101 10:51:39.477240  494457 cri.go:89] found id: ""
	I1101 10:51:39.477326  494457 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:39.490053  494457 retry.go:31] will retry after 264.388544ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:39Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:51:39.755627  494457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:51:39.781661  494457 pause.go:52] kubelet running: false
	I1101 10:51:39.781780  494457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:51:39.998773  494457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:51:39.998903  494457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:51:40.093899  494457 cri.go:89] found id: "7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01"
	I1101 10:51:40.093978  494457 cri.go:89] found id: "6973f868af67739d0ca69e54523b07f8023a75440e79117e45dc08ac4cd4eadb"
	I1101 10:51:40.093998  494457 cri.go:89] found id: "e1d26269c43dedd8c98302d9e3982d65d35c7c8b81d14098592ae01842e55e1d"
	I1101 10:51:40.094021  494457 cri.go:89] found id: "354489decdc5be3f11a1c587685b3a87320c7a34e86b10f5cc6b354777034093"
	I1101 10:51:40.094054  494457 cri.go:89] found id: "66a7d9d0871f59caa5a654326fa6af58cf9a0cb60f71adebd11d70504d202a8f"
	I1101 10:51:40.094079  494457 cri.go:89] found id: "0de30b77d1ca10da59b96521a28d795e3e2f58d2bf5933e2fc6be1269644272f"
	I1101 10:51:40.094120  494457 cri.go:89] found id: "a312b63badfe91286205ab3f2506b1f28b4e42298c8d0022b0e1c17bcddc1e12"
	I1101 10:51:40.094144  494457 cri.go:89] found id: "0ef612cf67931e99b0ff0b2cd78a42bcb290e5834448357a04f331cca1ab13cc"
	I1101 10:51:40.094165  494457 cri.go:89] found id: "59e8eb3202b226a9242a2418d10ad312d3fe21ba3c8163fbf7bfede124b48607"
	I1101 10:51:40.094217  494457 cri.go:89] found id: "5bedbcf6adb0068c6e314a6cf2ff873b7938d2ba6ec9da57a634909cee70e1fc"
	I1101 10:51:40.094239  494457 cri.go:89] found id: "1052577ace73dab7a4b657cf5e7a7050b89edcf1f440e4859057a775cd3e4d49"
	I1101 10:51:40.094260  494457 cri.go:89] found id: ""
	I1101 10:51:40.094452  494457 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:40.110820  494457 retry.go:31] will retry after 491.96381ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:40Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:51:40.603630  494457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:51:40.623028  494457 pause.go:52] kubelet running: false
	I1101 10:51:40.623233  494457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:51:40.900277  494457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:51:40.900460  494457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:51:41.034997  494457 cri.go:89] found id: "7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01"
	I1101 10:51:41.035093  494457 cri.go:89] found id: "6973f868af67739d0ca69e54523b07f8023a75440e79117e45dc08ac4cd4eadb"
	I1101 10:51:41.035115  494457 cri.go:89] found id: "e1d26269c43dedd8c98302d9e3982d65d35c7c8b81d14098592ae01842e55e1d"
	I1101 10:51:41.035134  494457 cri.go:89] found id: "354489decdc5be3f11a1c587685b3a87320c7a34e86b10f5cc6b354777034093"
	I1101 10:51:41.035166  494457 cri.go:89] found id: "66a7d9d0871f59caa5a654326fa6af58cf9a0cb60f71adebd11d70504d202a8f"
	I1101 10:51:41.035189  494457 cri.go:89] found id: "0de30b77d1ca10da59b96521a28d795e3e2f58d2bf5933e2fc6be1269644272f"
	I1101 10:51:41.035216  494457 cri.go:89] found id: "a312b63badfe91286205ab3f2506b1f28b4e42298c8d0022b0e1c17bcddc1e12"
	I1101 10:51:41.035258  494457 cri.go:89] found id: "0ef612cf67931e99b0ff0b2cd78a42bcb290e5834448357a04f331cca1ab13cc"
	I1101 10:51:41.035281  494457 cri.go:89] found id: "59e8eb3202b226a9242a2418d10ad312d3fe21ba3c8163fbf7bfede124b48607"
	I1101 10:51:41.035310  494457 cri.go:89] found id: "5bedbcf6adb0068c6e314a6cf2ff873b7938d2ba6ec9da57a634909cee70e1fc"
	I1101 10:51:41.035351  494457 cri.go:89] found id: "1052577ace73dab7a4b657cf5e7a7050b89edcf1f440e4859057a775cd3e4d49"
	I1101 10:51:41.035373  494457 cri.go:89] found id: ""
	I1101 10:51:41.035500  494457 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:41.050872  494457 out.go:203] 
	W1101 10:51:41.054020  494457 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:41.054050  494457 out.go:285] * 
	* 
	W1101 10:51:41.062064  494457 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:41.067160  494457 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-499088 --alsologtostderr -v=1 failed: exit status 80
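The stderr trace above shows the pause path: it disables the kubelet, enumerates CRI containers with crictl, then runs `sudo runc list -f json`, which keeps failing with "open /run/runc: no such file or directory" until the command gives up with GUEST_PAUSE. A rough stand-alone sketch of that list-and-retry step (run directly on the node rather than over SSH, with illustrative retry counts and delays) could look like:

// Minimal sketch of the failing container-listing step: run
// `sudo runc list -f json` with a short retry loop, the way the pause path
// retries before surfacing GUEST_PAUSE. Retry counts and delays are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func listRunningContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRunningContainers()
		if err == nil {
			fmt.Printf("runc list succeeded:\n%s\n", out)
			return
		}
		// On this node the runc state dir /run/runc does not exist, so every
		// attempt fails with "open /run/runc: no such file or directory".
		lastErr = fmt.Errorf("runc list failed: %v\noutput: %s", err, out)
		time.Sleep(time.Duration(attempt*250) * time.Millisecond)
	}
	fmt.Println("giving up:", lastErr) // the CLI surfaces this as GUEST_PAUSE
}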
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-499088
helpers_test.go:243: (dbg) docker inspect embed-certs-499088:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3",
	        "Created": "2025-11-01T10:48:58.141820601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488414,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:50:38.852478778Z",
	            "FinishedAt": "2025-11-01T10:50:37.992079949Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/hostname",
	        "HostsPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/hosts",
	        "LogPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3-json.log",
	        "Name": "/embed-certs-499088",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-499088:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-499088",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3",
	                "LowerDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-499088",
	                "Source": "/var/lib/docker/volumes/embed-certs-499088/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-499088",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-499088",
	                "name.minikube.sigs.k8s.io": "embed-certs-499088",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7cebbd817087fa26e249775557a776a2cdc373cf0a9c1e61a3b8d43f90e7e46",
	            "SandboxKey": "/var/run/docker/netns/b7cebbd81708",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-499088": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:1f:4b:33:d5:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b7910a68b927d6e29fdad9c6f3b7dabb12d2d1799598af6a052e70fa72598bc5",
	                    "EndpointID": "6c8ee55baa89f01a9a7bb5bb4af2b879f40fa84ba71b4adee59bf41bd3698876",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-499088",
	                        "495a58a1ddf7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
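The inspect output above is what the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call walks to find the published SSH port (127.0.0.1:33443 here). A small sketch that does the same lookup by decoding the JSON instead of evaluating a template, modeling only the fields it needs:

// Sketch: extract the published host binding for 22/tcp from `docker inspect`
// output, equivalent to the Go template used in the pause trace above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "embed-certs-499088").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		log.Fatal("no host binding for 22/tcp")
	}
	fmt.Println("ssh endpoint:", bindings[0].HostIp+":"+bindings[0].HostPort) // 127.0.0.1:33443 above
}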
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-499088 -n embed-certs-499088
E1101 10:51:41.347972  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-499088 -n embed-certs-499088: exit status 2 (482.401653ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
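The `--format={{.Host}}` and `--format={{.APIServer}}` flags used by the harness are Go text/template expressions rendered against minikube's status object. A toy sketch of that rendering, with a stand-in struct and example values rather than minikube's real Status type:

// Sketch of rendering a --format value such as {{.Host}} with text/template.
// The struct and field values here are stand-ins for illustration only.
package main

import (
	"os"
	"text/template"
)

type status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"} // example values
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}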
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-499088 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-499088 logs -n 25: (1.742907741s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:47 UTC │
	│ image   │ old-k8s-version-245622 image list --format=json                                                                                                                                                                                               │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │ 01 Nov 25 10:47 UTC │
	│ pause   │ -p old-k8s-version-245622 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │                     │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p cert-expiration-308600                                                                                                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-014050 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-014050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ stop    │ -p embed-certs-499088 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:51 UTC │
	│ image   │ default-k8s-diff-port-014050 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p disable-driver-mounts-514829                                                                                                                                                                                                               │ disable-driver-mounts-514829 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ image   │ embed-certs-499088 image list --format=json                                                                                                                                                                                                   │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ pause   │ -p embed-certs-499088 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:51:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:51:10.947095  491840 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:10.947337  491840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:10.947366  491840 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:10.947386  491840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:10.947704  491840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:51:10.948201  491840 out.go:368] Setting JSON to false
	I1101 10:51:10.949266  491840 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9223,"bootTime":1761985048,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:51:10.949375  491840 start.go:143] virtualization:  
	I1101 10:51:10.953145  491840 out.go:179] * [no-preload-548708] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:51:10.957205  491840 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:51:10.957334  491840 notify.go:221] Checking for updates...
	I1101 10:51:10.963629  491840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:51:10.966713  491840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:51:10.969717  491840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:51:10.972703  491840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:51:10.975579  491840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:51:10.979108  491840 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:10.979224  491840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:51:11.009071  491840 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:51:11.009221  491840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:11.080262  491840 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:51:11.069044067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:11.080371  491840 docker.go:319] overlay module found
	I1101 10:51:11.083783  491840 out.go:179] * Using the docker driver based on user configuration
	I1101 10:51:11.086679  491840 start.go:309] selected driver: docker
	I1101 10:51:11.086708  491840 start.go:930] validating driver "docker" against <nil>
	I1101 10:51:11.086742  491840 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:51:11.087478  491840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:11.148096  491840 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:51:11.138337242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:11.148287  491840 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:51:11.148533  491840 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:51:11.151402  491840 out.go:179] * Using Docker driver with root privileges
	I1101 10:51:11.154228  491840 cni.go:84] Creating CNI manager for ""
	I1101 10:51:11.154299  491840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:51:11.154314  491840 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:51:11.154413  491840 start.go:353] cluster config:
	{Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:51:11.157657  491840 out.go:179] * Starting "no-preload-548708" primary control-plane node in "no-preload-548708" cluster
	I1101 10:51:11.160618  491840 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:51:11.163572  491840 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:51:11.166453  491840 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:51:11.166551  491840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:51:11.166624  491840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json ...
	I1101 10:51:11.166662  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json: {Name:mkca8c713f716c8d4bacd660034fcee6498bc69e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:11.168484  491840 cache.go:107] acquiring lock: {Name:mk87c12063bfe6477c1b6ed8fc827cc60e9ca811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.168660  491840 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:51:11.168680  491840 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.769765ms
	I1101 10:51:11.169514  491840 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:51:11.169561  491840 cache.go:107] acquiring lock: {Name:mk98f5306fba9c79ff24fb30add0aac4b2ea9d11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.170394  491840 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:11.170788  491840 cache.go:107] acquiring lock: {Name:mkd94f63e239c14a2fc215ef4549c0b3008ae371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.170986  491840 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:11.171282  491840 cache.go:107] acquiring lock: {Name:mke31e546546420a97a22fb575f397eaa8d20c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.171387  491840 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:11.171517  491840 cache.go:107] acquiring lock: {Name:mk4c1242d2913ae89c6c2d48e391247cfb4b6c0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.171619  491840 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:11.171864  491840 cache.go:107] acquiring lock: {Name:mkf516acd2e5d0c72111e5669f8226bc99c3850c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.172001  491840 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:51:11.172125  491840 cache.go:107] acquiring lock: {Name:mk3196340dda3f6ca3036b488f880ffd822482f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.172231  491840 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:11.172428  491840 cache.go:107] acquiring lock: {Name:mk0a26100d6da9ffb6e62c9df95140af96aec6f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.172520  491840 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:11.174391  491840 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:51:11.176524  491840 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:11.176704  491840 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:11.176850  491840 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:11.177024  491840 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:11.177174  491840 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:11.177330  491840 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:11.192766  491840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:51:11.192791  491840 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:51:11.192807  491840 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:51:11.192830  491840 start.go:360] acquireMachinesLock for no-preload-548708: {Name:mk9ab5039a75ce95aea667171fcdfabc6fc7786c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.193002  491840 start.go:364] duration metric: took 152.387µs to acquireMachinesLock for "no-preload-548708"
	I1101 10:51:11.193036  491840 start.go:93] Provisioning new machine with config: &{Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:51:11.193117  491840 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:51:09.838678  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:11.839927  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	I1101 10:51:11.198440  491840 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:51:11.198681  491840 start.go:159] libmachine.API.Create for "no-preload-548708" (driver="docker")
	I1101 10:51:11.198732  491840 client.go:173] LocalClient.Create starting
	I1101 10:51:11.198820  491840 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 10:51:11.198864  491840 main.go:143] libmachine: Decoding PEM data...
	I1101 10:51:11.198878  491840 main.go:143] libmachine: Parsing certificate...
	I1101 10:51:11.198939  491840 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 10:51:11.198956  491840 main.go:143] libmachine: Decoding PEM data...
	I1101 10:51:11.198966  491840 main.go:143] libmachine: Parsing certificate...
	I1101 10:51:11.199337  491840 cli_runner.go:164] Run: docker network inspect no-preload-548708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:51:11.226076  491840 cli_runner.go:211] docker network inspect no-preload-548708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:51:11.226155  491840 network_create.go:284] running [docker network inspect no-preload-548708] to gather additional debugging logs...
	I1101 10:51:11.226176  491840 cli_runner.go:164] Run: docker network inspect no-preload-548708
	W1101 10:51:11.241410  491840 cli_runner.go:211] docker network inspect no-preload-548708 returned with exit code 1
	I1101 10:51:11.241437  491840 network_create.go:287] error running [docker network inspect no-preload-548708]: docker network inspect no-preload-548708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-548708 not found
	I1101 10:51:11.241450  491840 network_create.go:289] output of [docker network inspect no-preload-548708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-548708 not found
	
	** /stderr **
	I1101 10:51:11.241547  491840 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:51:11.263212  491840 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
	I1101 10:51:11.263671  491840 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-adecbbb769f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:b0:b5:2e:4c:30} reservation:<nil>}
	I1101 10:51:11.263953  491840 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2077d26d1806 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:49:68:b6:9e:fb} reservation:<nil>}
	I1101 10:51:11.264358  491840 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b7910a68b927 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:9a:fb:6d:bb:9f} reservation:<nil>}
	I1101 10:51:11.264913  491840 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c21e70}
	I1101 10:51:11.264977  491840 network_create.go:124] attempt to create docker network no-preload-548708 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:51:11.265049  491840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-548708 no-preload-548708
	I1101 10:51:11.346310  491840 network_create.go:108] docker network no-preload-548708 192.168.85.0/24 created
	I1101 10:51:11.346392  491840 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-548708" container
	I1101 10:51:11.346519  491840 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:51:11.362119  491840 cli_runner.go:164] Run: docker volume create no-preload-548708 --label name.minikube.sigs.k8s.io=no-preload-548708 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:51:11.379541  491840 oci.go:103] Successfully created a docker volume no-preload-548708
	I1101 10:51:11.379633  491840 cli_runner.go:164] Run: docker run --rm --name no-preload-548708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-548708 --entrypoint /usr/bin/test -v no-preload-548708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:51:11.482710  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1101 10:51:11.497113  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:51:11.506013  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:51:11.518854  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:51:11.527226  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:51:11.530418  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:51:11.536008  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:51:11.536058  491840 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 364.196832ms
	I1101 10:51:11.536071  491840 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:51:11.540984  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:51:11.967221  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:51:11.967242  491840 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 795.727157ms
	I1101 10:51:11.967254  491840 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:51:12.050898  491840 oci.go:107] Successfully prepared a docker volume no-preload-548708
	I1101 10:51:12.050961  491840 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1101 10:51:12.051134  491840 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:51:12.051250  491840 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:51:12.118836  491840 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-548708 --name no-preload-548708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-548708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-548708 --network no-preload-548708 --ip 192.168.85.2 --volume no-preload-548708:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:51:12.490303  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:51:12.490453  491840 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.318019822s
	I1101 10:51:12.490469  491840 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:51:12.543605  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:51:12.543682  491840 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.372402737s
	I1101 10:51:12.543720  491840 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:51:12.601454  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:51:12.601478  491840 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.430696745s
	I1101 10:51:12.601490  491840 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:51:12.619361  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Running}}
	I1101 10:51:12.653191  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:51:12.668582  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:51:12.668609  491840 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.499055544s
	I1101 10:51:12.668622  491840 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:51:12.686327  491840 cli_runner.go:164] Run: docker exec no-preload-548708 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:51:12.766734  491840 oci.go:144] the created container "no-preload-548708" has a running status.
	I1101 10:51:12.766762  491840 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa...
	I1101 10:51:13.200710  491840 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:51:13.247296  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:51:13.286179  491840 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:51:13.286207  491840 kic_runner.go:114] Args: [docker exec --privileged no-preload-548708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:51:13.396516  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:51:13.414376  491840 machine.go:94] provisionDockerMachine start ...
	I1101 10:51:13.414482  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:13.433538  491840 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:13.433879  491840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1101 10:51:13.433895  491840 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:51:13.434533  491840 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:51:13.561223  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:51:13.561254  491840 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.389131027s
	I1101 10:51:13.561274  491840 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:51:13.561313  491840 cache.go:87] Successfully saved all images to host disk.
	W1101 10:51:14.337069  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:16.337224  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:18.338239  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	I1101 10:51:16.588562  491840 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548708
	
	I1101 10:51:16.588586  491840 ubuntu.go:182] provisioning hostname "no-preload-548708"
	I1101 10:51:16.588653  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:16.605862  491840 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:16.606176  491840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1101 10:51:16.606193  491840 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-548708 && echo "no-preload-548708" | sudo tee /etc/hostname
	I1101 10:51:16.770854  491840 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548708
	
	I1101 10:51:16.770950  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:16.788896  491840 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:16.789253  491840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1101 10:51:16.789282  491840 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-548708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-548708/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-548708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:51:16.945277  491840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:51:16.945315  491840 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:51:16.945370  491840 ubuntu.go:190] setting up certificates
	I1101 10:51:16.945381  491840 provision.go:84] configureAuth start
	I1101 10:51:16.945443  491840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:51:16.963866  491840 provision.go:143] copyHostCerts
	I1101 10:51:16.963936  491840 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:51:16.963950  491840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:51:16.964033  491840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:51:16.964162  491840 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:51:16.964175  491840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:51:16.964203  491840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:51:16.964270  491840 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:51:16.964278  491840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:51:16.964304  491840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:51:16.964365  491840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.no-preload-548708 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-548708]
	I1101 10:51:17.502066  491840 provision.go:177] copyRemoteCerts
	I1101 10:51:17.502150  491840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:51:17.502193  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:17.521208  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:17.628807  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:51:17.652329  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:51:17.671727  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:51:17.690971  491840 provision.go:87] duration metric: took 745.553231ms to configureAuth
	I1101 10:51:17.691045  491840 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:51:17.691299  491840 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:17.691455  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:17.710666  491840 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:17.710975  491840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1101 10:51:17.710996  491840 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:51:18.071414  491840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:51:18.071440  491840 machine.go:97] duration metric: took 4.657045409s to provisionDockerMachine
	I1101 10:51:18.071458  491840 client.go:176] duration metric: took 6.87270752s to LocalClient.Create
	I1101 10:51:18.071480  491840 start.go:167] duration metric: took 6.872800067s to libmachine.API.Create "no-preload-548708"
	I1101 10:51:18.071492  491840 start.go:293] postStartSetup for "no-preload-548708" (driver="docker")
	I1101 10:51:18.071503  491840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:51:18.071588  491840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:51:18.071647  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:18.090205  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:18.197172  491840 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:51:18.200469  491840 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:51:18.200498  491840 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:51:18.200511  491840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:51:18.200584  491840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:51:18.200664  491840 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:51:18.200771  491840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:51:18.208241  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:51:18.227109  491840 start.go:296] duration metric: took 155.6025ms for postStartSetup
	I1101 10:51:18.227514  491840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:51:18.244551  491840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json ...
	I1101 10:51:18.244838  491840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:51:18.244891  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:18.262008  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:18.366442  491840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:51:18.371507  491840 start.go:128] duration metric: took 7.178367436s to createHost
	I1101 10:51:18.371538  491840 start.go:83] releasing machines lock for "no-preload-548708", held for 7.178518649s
	I1101 10:51:18.371625  491840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:51:18.391202  491840 ssh_runner.go:195] Run: cat /version.json
	I1101 10:51:18.391218  491840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:51:18.391256  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:18.391287  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:18.417234  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:18.418056  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:18.520707  491840 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:18.623345  491840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:51:18.659043  491840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:51:18.663855  491840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:51:18.663929  491840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:51:18.696500  491840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:51:18.696574  491840 start.go:496] detecting cgroup driver to use...
	I1101 10:51:18.696625  491840 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:51:18.696707  491840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:51:18.716416  491840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:51:18.729707  491840 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:51:18.729770  491840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:51:18.746493  491840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:51:18.765764  491840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:51:18.917297  491840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:51:19.048005  491840 docker.go:234] disabling docker service ...
	I1101 10:51:19.048102  491840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:51:19.070200  491840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:51:19.084347  491840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:51:19.232591  491840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:51:19.362395  491840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:51:19.376279  491840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:51:19.392177  491840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:51:19.392249  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.401793  491840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:51:19.401884  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.410980  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.420042  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.429942  491840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:51:19.438178  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.447232  491840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.461271  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.471147  491840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:51:19.479689  491840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:51:19.487211  491840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:51:19.611909  491840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:51:19.740592  491840 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:51:19.740707  491840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:51:19.744617  491840 start.go:564] Will wait 60s for crictl version
	I1101 10:51:19.744712  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:19.748470  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:51:19.777791  491840 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:51:19.777907  491840 ssh_runner.go:195] Run: crio --version
	I1101 10:51:19.814847  491840 ssh_runner.go:195] Run: crio --version
	I1101 10:51:19.850125  491840 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:51:19.853094  491840 cli_runner.go:164] Run: docker network inspect no-preload-548708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:51:19.869726  491840 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:51:19.874408  491840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:51:19.884428  491840 kubeadm.go:884] updating cluster {Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:51:19.884546  491840 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:51:19.884596  491840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:51:19.913426  491840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 10:51:19.913454  491840 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 10:51:19.913497  491840 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:19.913694  491840 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:19.913783  491840 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:19.913868  491840 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:19.913958  491840 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:19.914046  491840 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:51:19.914147  491840 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:19.914234  491840 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:19.915275  491840 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:51:19.915536  491840 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:19.915672  491840 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:19.915827  491840 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:19.915972  491840 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:19.916453  491840 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:19.916970  491840 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:19.917013  491840 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.135156  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.137615  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1101 10:51:20.145257  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.146049  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.147193  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.153755  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.155100  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.277648  491840 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1101 10:51:20.277762  491840 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.277852  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.318970  491840 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1101 10:51:20.319095  491840 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1101 10:51:20.319214  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.363999  491840 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1101 10:51:20.364039  491840 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.364090  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364163  491840 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1101 10:51:20.364180  491840 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.364201  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364261  491840 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1101 10:51:20.364285  491840 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.364307  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364370  491840 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1101 10:51:20.364387  491840 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.364410  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364478  491840 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1101 10:51:20.364496  491840 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.364520  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364609  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.364668  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:51:20.414396  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:51:20.414475  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.414533  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.414593  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.414656  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.414710  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.414788  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.528560  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.528637  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:51:20.528685  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.528742  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.534173  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.534253  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.534315  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.646924  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:51:20.647108  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:51:20.647250  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.647358  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1101 10:51:20.647504  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.647558  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1101 10:51:20.651218  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.651486  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.651621  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.727206  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1101 10:51:20.727312  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1101 10:51:20.727472  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:51:20.727368  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1101 10:51:20.727538  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1101 10:51:20.727610  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:51:20.727679  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:51:20.727796  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:51:20.763140  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:51:20.763243  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:51:20.763329  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:51:20.763405  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:51:20.763459  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:51:20.763510  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:51:20.763564  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1101 10:51:20.763582  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1101 10:51:20.763620  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1101 10:51:20.763634  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1101 10:51:20.808444  491840 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1101 10:51:20.808512  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1101 10:51:20.817952  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1101 10:51:20.817990  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1101 10:51:20.818038  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1101 10:51:20.818054  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1101 10:51:20.818083  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1101 10:51:20.818097  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
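The image transfer above follows a probe-and-copy pattern: stat the target path on the node, and scp the cached tarball over only when stat exits non-zero. A minimal local sketch of the same pattern (the paths and the ensureOnNode helper are hypothetical, not minikube's ssh_runner code):

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ensureOnNode copies src to dst only when dst is missing, mirroring the
// `stat -c "%s %y"` probe followed by an scp seen in the log above.
func ensureOnNode(src, dst string) error {
	// Probe: a non-zero exit from stat means the file is absent.
	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical paths standing in for the local cache and the node image dir.
	if err := os.MkdirAll("/tmp/images", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := ensureOnNode("./cache/etcd_3.6.4-0", "/tmp/images/etcd_3.6.4-0"); err != nil {
		fmt.Println("transfer failed:", err)
	}
}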
	W1101 10:51:20.338689  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:22.837155  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	I1101 10:51:24.337958  488285 pod_ready.go:94] pod "coredns-66bc5c9577-pdh6r" is "Ready"
	I1101 10:51:24.337987  488285 pod_ready.go:86] duration metric: took 31.007222811s for pod "coredns-66bc5c9577-pdh6r" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.341357  488285 pod_ready.go:83] waiting for pod "etcd-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.347427  488285 pod_ready.go:94] pod "etcd-embed-certs-499088" is "Ready"
	I1101 10:51:24.347458  488285 pod_ready.go:86] duration metric: took 6.06948ms for pod "etcd-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.350580  488285 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.356228  488285 pod_ready.go:94] pod "kube-apiserver-embed-certs-499088" is "Ready"
	I1101 10:51:24.356260  488285 pod_ready.go:86] duration metric: took 5.649004ms for pod "kube-apiserver-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.364054  488285 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.534602  488285 pod_ready.go:94] pod "kube-controller-manager-embed-certs-499088" is "Ready"
	I1101 10:51:24.534631  488285 pod_ready.go:86] duration metric: took 170.548921ms for pod "kube-controller-manager-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.735367  488285 pod_ready.go:83] waiting for pod "kube-proxy-dqf86" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:25.135342  488285 pod_ready.go:94] pod "kube-proxy-dqf86" is "Ready"
	I1101 10:51:25.135379  488285 pod_ready.go:86] duration metric: took 399.980843ms for pod "kube-proxy-dqf86" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:25.335244  488285 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:25.735718  488285 pod_ready.go:94] pod "kube-scheduler-embed-certs-499088" is "Ready"
	I1101 10:51:25.735744  488285 pod_ready.go:86] duration metric: took 400.476675ms for pod "kube-scheduler-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:25.735758  488285 pod_ready.go:40] duration metric: took 32.409170234s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:51:25.805442  488285 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:51:25.809343  488285 out.go:179] * Done! kubectl is now configured to use "embed-certs-499088" cluster and "default" namespace by default
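The readiness wait above polls kube-system pods by label until each one is Ready or gone. A condensed client-go sketch of such a polling loop, assuming a reachable local kubeconfig (illustrative only; this is not the pod_ready.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One label selector shown here; the log waits on several in turn
	// (k8s-app=kube-dns, component=etcd, component=kube-apiserver, ...).
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !isReady(p) {
				ready = false
			}
		}
		if ready {
			fmt.Println("all matching pods are Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}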
	I1101 10:51:21.198019  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	W1101 10:51:21.240534  491840 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1101 10:51:21.240712  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:21.342108  491840 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:51:21.342182  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:51:21.414845  491840 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1101 10:51:21.414892  491840 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:21.414942  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:23.107654  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.765444351s)
	I1101 10:51:23.107687  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1101 10:51:23.107706  491840 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:51:23.107749  491840 ssh_runner.go:235] Completed: which crictl: (1.692793073s)
	I1101 10:51:23.107827  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:51:23.107876  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:24.914124  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.806270888s)
	I1101 10:51:24.914152  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1101 10:51:24.914170  491840 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:51:24.914219  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:51:24.914283  491840 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.806374635s)
	I1101 10:51:24.914321  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:26.323948  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.409702687s)
	I1101 10:51:26.323983  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1101 10:51:26.323987  491840 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.409650083s)
	I1101 10:51:26.324005  491840 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:51:26.324051  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:51:26.324054  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:27.729497  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.405421694s)
	I1101 10:51:27.729525  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1101 10:51:27.729537  491840 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.40546597s)
	I1101 10:51:27.729589  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 10:51:27.729544  491840 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:51:27.729668  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:51:27.729676  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:51:29.167381  491840 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.437684526s)
	I1101 10:51:29.167411  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1101 10:51:29.167437  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1101 10:51:29.167578  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.437900019s)
	I1101 10:51:29.167595  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1101 10:51:29.167611  491840 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:51:29.167651  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:51:32.932132  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.764453137s)
	I1101 10:51:32.932161  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1101 10:51:32.932180  491840 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:51:32.932228  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:51:33.497298  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 10:51:33.497330  491840 cache_images.go:125] Successfully loaded all cached images
	I1101 10:51:33.497337  491840 cache_images.go:94] duration metric: took 13.583866035s to LoadCachedImages
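Each transferred tarball is then loaded into CRI-O's image store with `sudo podman load -i`, one image at a time; the whole batch above takes roughly 13.6 seconds. A minimal sketch of that loading loop (the tarball list here is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The run above loads pause, the kube-* images, coredns, etcd and
	// storage-provisioner serially; two stand-in paths shown here.
	tars := []string{
		"/var/lib/minikube/images/pause_3.10.1",
		"/var/lib/minikube/images/etcd_3.6.4-0",
	}
	for _, t := range tars {
		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
		if err != nil {
			fmt.Printf("load %s failed: %v\n%s", t, err, out)
			continue
		}
		fmt.Printf("loaded %s\n", t)
	}
}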
	I1101 10:51:33.497347  491840 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:51:33.497436  491840 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-548708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:51:33.497517  491840 ssh_runner.go:195] Run: crio config
	I1101 10:51:33.575787  491840 cni.go:84] Creating CNI manager for ""
	I1101 10:51:33.575813  491840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:51:33.575831  491840 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:51:33.575856  491840 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-548708 NodeName:no-preload-548708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:51:33.575987  491840 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-548708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:51:33.576065  491840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:51:33.585491  491840 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1101 10:51:33.585560  491840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1101 10:51:33.594300  491840 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1101 10:51:33.594461  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1101 10:51:33.594819  491840 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1101 10:51:33.594866  491840 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1101 10:51:33.599654  491840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1101 10:51:33.599751  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1101 10:51:34.414831  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1101 10:51:34.419180  491840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1101 10:51:34.419215  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1101 10:51:34.572735  491840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:51:34.614591  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1101 10:51:34.622811  491840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1101 10:51:34.623009  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
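kubeadm, kubelet and kubectl are fetched from dl.k8s.io with a `checksum=file:...sha256` query, i.e. each download is checked against the published SHA-256 digest before being transferred to the node. A small sketch of that verification step, assuming the .sha256 file holds a single hex digest (local paths are hypothetical):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verify compares the SHA-256 of path against the hex digest stored in sumPath.
func verify(path, sumPath string) error {
	want, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch: got %s", got)
	}
	return nil
}

func main() {
	// Stand-ins for the cached kubelet binary and its downloaded .sha256 file.
	if err := verify("./kubelet", "./kubelet.sha256"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("checksum OK")
}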
	I1101 10:51:35.075169  491840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:51:35.085650  491840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:51:35.101291  491840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:51:35.116015  491840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 10:51:35.130495  491840 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:51:35.134742  491840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:51:35.144880  491840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:51:35.277991  491840 ssh_runner.go:195] Run: sudo systemctl start kubelet
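The kubelet drop-in and unit file are written to /etc/systemd/system/kubelet.service.d and /lib/systemd/system, followed by a daemon-reload and `systemctl start kubelet`. A minimal sketch of installing a drop-in and starting the unit (the drop-in body here is abbreviated and hypothetical; the real 10-kubeadm.conf is rendered from the ExecStart block shown earlier):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml\n"
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	// systemd only picks up new drop-ins after a daemon-reload.
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			fmt.Printf("systemctl %v failed: %v\n%s", args, err, out)
			return
		}
	}
}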
	I1101 10:51:35.300377  491840 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708 for IP: 192.168.85.2
	I1101 10:51:35.300396  491840 certs.go:195] generating shared ca certs ...
	I1101 10:51:35.300413  491840 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.300551  491840 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:51:35.300607  491840 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:51:35.300615  491840 certs.go:257] generating profile certs ...
	I1101 10:51:35.300669  491840 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.key
	I1101 10:51:35.300679  491840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt with IP's: []
	I1101 10:51:35.587432  491840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt ...
	I1101 10:51:35.587466  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: {Name:mk5eecb53de2e7b31296c469aa0fcf5576099ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.587667  491840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.key ...
	I1101 10:51:35.587685  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.key: {Name:mk97e7571d9b0cfdf071850bdfb54a6f4112332d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.587783  491840 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3
	I1101 10:51:35.587801  491840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt.71cdcdd3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:51:35.970711  491840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt.71cdcdd3 ...
	I1101 10:51:35.970743  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt.71cdcdd3: {Name:mkd6571c82be51a52a39f02101721cfb4c8d3e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.970965  491840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3 ...
	I1101 10:51:35.970984  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3: {Name:mk61f24f769a1202305257f37f3377591456bec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.971075  491840 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt.71cdcdd3 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt
	I1101 10:51:35.971161  491840 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key
	I1101 10:51:35.971221  491840 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key
	I1101 10:51:35.971239  491840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt with IP's: []
	I1101 10:51:36.464548  491840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt ...
	I1101 10:51:36.464582  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt: {Name:mkf2ea3aa30ff0861fcfc606ae4a49fcd48cd025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:36.464773  491840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key ...
	I1101 10:51:36.464788  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key: {Name:mkf70d7ffe2a37d9712976bed8f0a87a77196116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
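The apiserver profile cert above is issued with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A self-contained crypto/x509 sketch of issuing a certificate with those SANs; it self-signs for brevity, whereas the real flow signs with the minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-demo"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The same IP SANs the log lists for the apiserver profile cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	// Self-signed here for brevity (template doubles as parent).
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}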
	I1101 10:51:36.465024  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:51:36.465068  491840 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:51:36.465081  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:51:36.465110  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:51:36.465138  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:51:36.465164  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:51:36.465209  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:51:36.465762  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:51:36.486700  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:51:36.507116  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:51:36.527957  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:51:36.548084  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:51:36.567057  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:51:36.585446  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:51:36.603600  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:51:36.621912  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:51:36.640242  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:51:36.658854  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:51:36.676884  491840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:51:36.691261  491840 ssh_runner.go:195] Run: openssl version
	I1101 10:51:36.700480  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:51:36.711343  491840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:51:36.716009  491840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:51:36.716079  491840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:51:36.758162  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:51:36.766931  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:51:36.776205  491840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:51:36.780845  491840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:51:36.780914  491840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:51:36.823482  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:51:36.832657  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:51:36.841663  491840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:51:36.846104  491840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:51:36.846174  491840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:51:36.888399  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
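The `openssl x509 -hash -noout` calls compute the subject hash that OpenSSL uses to look up CA certificates, and the `ln -fs ... /etc/ssl/certs/<hash>.0` links (51391683.0, 3ec20f2e.0, b5213941.0 above) register each cert under that hash. A sketch of the same hash-and-symlink step (requires root; the cert path is taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL resolves CAs via <subject-hash>.0 symlinks in the certs dir.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink failed (needs root):", err)
	}
}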
	I1101 10:51:36.897303  491840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:51:36.902696  491840 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:51:36.902775  491840 kubeadm.go:401] StartCluster: {Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:51:36.902871  491840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:36.902964  491840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:36.934175  491840 cri.go:89] found id: ""
	I1101 10:51:36.934312  491840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:51:36.945777  491840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:51:36.958992  491840 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:51:36.959067  491840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:51:36.967077  491840 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:51:36.967098  491840 kubeadm.go:158] found existing configuration files:
	
	I1101 10:51:36.967154  491840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:51:36.974971  491840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:51:36.975036  491840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:51:36.982998  491840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:51:36.991398  491840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:51:36.991517  491840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:51:36.999723  491840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:51:37.010384  491840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:51:37.010456  491840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:51:37.023120  491840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:51:37.032769  491840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:51:37.032839  491840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:51:37.041625  491840 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:51:37.109408  491840 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:51:37.109654  491840 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:51:37.177718  491840 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.271598671Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bbf54b36-d224-4c6d-a8a5-24aaadec88dd name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.272532374Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=82594960-938e-4be7-bd46-5896e1e31075 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.27265249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.290084012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.291439192Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a14337c32829eb9127042a18ba9d54dd7b47d8b0bc82a787002ba796c2e15386/merged/etc/passwd: no such file or directory"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.291616851Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a14337c32829eb9127042a18ba9d54dd7b47d8b0bc82a787002ba796c2e15386/merged/etc/group: no such file or directory"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.291981285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.371687521Z" level=info msg="Created container 7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01: kube-system/storage-provisioner/storage-provisioner" id=82594960-938e-4be7-bd46-5896e1e31075 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.372757858Z" level=info msg="Starting container: 7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01" id=2ac6aed8-1d24-4f93-bf63-8069ded57d71 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.381667922Z" level=info msg="Started container" PID=1646 containerID=7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01 description=kube-system/storage-provisioner/storage-provisioner id=2ac6aed8-1d24-4f93-bf63-8069ded57d71 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a75362f659e7f059f24dadde6bd5456f870dad9029347c007129f9f06601b5c
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.761583715Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.767988649Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.768023152Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.768050352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.771206016Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.77126639Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.771290784Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.774388946Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.774420741Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.774441739Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.779022352Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.779082931Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.779104305Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.783344837Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.783396382Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7440e8684eb54       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   3a75362f659e7       storage-provisioner                          kube-system
	5bedbcf6adb00       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   afd9a48daee30       dashboard-metrics-scraper-6ffb444bf9-fr889   kubernetes-dashboard
	1052577ace73d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   ba21ec66a9a91       kubernetes-dashboard-855c9754f9-tgcrm        kubernetes-dashboard
	6973f868af677       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   241d4faf39d79       coredns-66bc5c9577-pdh6r                     kube-system
	254d0250d4472       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   9418cc0499487       busybox                                      default
	e1d26269c43de       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   3a75362f659e7       storage-provisioner                          kube-system
	354489decdc5b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   c021690cd244a       kube-proxy-dqf86                             kube-system
	66a7d9d0871f5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   05f1bfa4d8fe6       kindnet-9sr9j                                kube-system
	0de30b77d1ca1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           55 seconds ago      Running             kube-controller-manager     1                   94483ef080498       kube-controller-manager-embed-certs-499088   kube-system
	a312b63badfe9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           55 seconds ago      Running             kube-apiserver              1                   ac50bcba8c2f5       kube-apiserver-embed-certs-499088            kube-system
	0ef612cf67931       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           55 seconds ago      Running             etcd                        1                   bf6b3fe24aad5       etcd-embed-certs-499088                      kube-system
	59e8eb3202b22       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           55 seconds ago      Running             kube-scheduler              1                   1d3b3f9dfdeb0       kube-scheduler-embed-certs-499088            kube-system
	
	
	==> coredns [6973f868af67739d0ca69e54523b07f8023a75440e79117e45dc08ac4cd4eadb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40481 - 443 "HINFO IN 1211577556126345724.1387487907648605909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020902743s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-499088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-499088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=embed-certs-499088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_49_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:49:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-499088
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:51:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:51:32 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:51:32 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:51:32 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:51:32 +0000   Sat, 01 Nov 2025 10:50:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-499088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                07472705-003c-41a7-ae50-6d94d68f067a
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-pdh6r                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m12s
	  kube-system                 etcd-embed-certs-499088                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m17s
	  kube-system                 kindnet-9sr9j                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-embed-certs-499088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-embed-certs-499088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-dqf86                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-embed-certs-499088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fr889    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tgcrm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m11s              kube-proxy       
	  Normal   Starting                 49s                kube-proxy       
	  Normal   Starting                 2m18s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m18s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m17s              kubelet          Node embed-certs-499088 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m17s              kubelet          Node embed-certs-499088 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m17s              kubelet          Node embed-certs-499088 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m14s              node-controller  Node embed-certs-499088 event: Registered Node embed-certs-499088 in Controller
	  Normal   NodeReady                91s                kubelet          Node embed-certs-499088 status is now: NodeReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node embed-certs-499088 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node embed-certs-499088 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node embed-certs-499088 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node embed-certs-499088 event: Registered Node embed-certs-499088 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ef612cf67931e99b0ff0b2cd78a42bcb290e5834448357a04f331cca1ab13cc] <==
	{"level":"warn","ts":"2025-11-01T10:50:49.493017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.569202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.632253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.681140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.708193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.743492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.775438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.814055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.845748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.872434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.891829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.912132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.937559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.963783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.982416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.016118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.019924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.037248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.062640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.076022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.099199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.142208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.166372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.193670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.246093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:51:42 up  2:34,  0 user,  load average: 4.17, 3.58, 2.91
	Linux embed-certs-499088 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [66a7d9d0871f59caa5a654326fa6af58cf9a0cb60f71adebd11d70504d202a8f] <==
	I1101 10:50:52.479420       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:50:52.525332       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:50:52.525617       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:50:52.525824       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:50:52.525848       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:50:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:50:52.761636       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:50:52.761729       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:50:52.761740       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:50:52.763883       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:51:22.762288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:51:22.763605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:51:22.763728       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:51:22.763818       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:51:24.361924       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:51:24.362017       1 metrics.go:72] Registering metrics
	I1101 10:51:24.362141       1 controller.go:711] "Syncing nftables rules"
	I1101 10:51:32.761237       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:51:32.761350       1 main.go:301] handling current node
	I1101 10:51:42.769848       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:51:42.769881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a312b63badfe91286205ab3f2506b1f28b4e42298c8d0022b0e1c17bcddc1e12] <==
	I1101 10:50:51.232390       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:50:51.244665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:50:51.251709       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:50:51.251995       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:50:51.252060       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:50:51.257835       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:50:51.263123       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:50:51.263320       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:50:51.263359       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:50:51.263410       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:50:51.268570       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:50:51.268610       1 policy_source.go:240] refreshing policies
	E1101 10:50:51.279225       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:50:51.331630       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:50:51.845005       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:50:51.934289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:50:52.035393       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:50:52.130956       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:50:52.201422       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:50:52.250505       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:50:52.608976       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.40.158"}
	I1101 10:50:52.665079       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.149.227"}
	I1101 10:50:54.827109       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:50:54.878991       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:50:54.976768       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0de30b77d1ca10da59b96521a28d795e3e2f58d2bf5933e2fc6be1269644272f] <==
	I1101 10:50:54.452386       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:50:54.455560       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:50:54.458778       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:50:54.459940       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:50:54.459991       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:50:54.460023       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:50:54.460036       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:50:54.460043       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:50:54.462121       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:50:54.467495       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:50:54.470130       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:50:54.471376       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:50:54.471420       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:50:54.471459       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:50:54.473055       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:50:54.473192       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:50:54.473291       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-499088"
	I1101 10:50:54.473361       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:50:54.478300       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:50:54.478782       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:50:54.480709       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:50:54.480721       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:50:54.487994       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:50:54.488021       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:50:54.488030       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [354489decdc5be3f11a1c587685b3a87320c7a34e86b10f5cc6b354777034093] <==
	I1101 10:50:52.745594       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:50:52.866580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:50:52.968457       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:50:52.968565       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:50:52.968721       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:50:52.996905       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:50:52.997356       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:50:53.015239       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:50:53.015639       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:50:53.015954       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:50:53.017889       1 config.go:200] "Starting service config controller"
	I1101 10:50:53.017963       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:50:53.018006       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:50:53.018054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:50:53.018094       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:50:53.018128       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:50:53.018798       1 config.go:309] "Starting node config controller"
	I1101 10:50:53.018869       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:50:53.018900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:50:53.118961       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:50:53.119153       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:50:53.119168       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [59e8eb3202b226a9242a2418d10ad312d3fe21ba3c8163fbf7bfede124b48607] <==
	I1101 10:50:48.588161       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:50:51.101109       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:50:51.101227       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:50:51.101263       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:50:51.101313       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:50:51.222668       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:50:51.222697       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:50:51.228843       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:50:51.228982       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:50:51.229003       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:50:51.229051       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:50:51.339986       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:50:52 embed-certs-499088 kubelet[778]: W1101 10:50:52.222732     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-05f1bfa4d8fe6539dabc59940530d691cfb02e49cd96ada5681ee446d2f8c43a WatchSource:0}: Error finding container 05f1bfa4d8fe6539dabc59940530d691cfb02e49cd96ada5681ee446d2f8c43a: Status 404 returned error can't find the container with id 05f1bfa4d8fe6539dabc59940530d691cfb02e49cd96ada5681ee446d2f8c43a
	Nov 01 10:50:52 embed-certs-499088 kubelet[778]: W1101 10:50:52.337852     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-9418cc04994875600e4bcbc570c98f8a8cc2307b94ab2805ad79ec7a13bbbc30 WatchSource:0}: Error finding container 9418cc04994875600e4bcbc570c98f8a8cc2307b94ab2805ad79ec7a13bbbc30: Status 404 returned error can't find the container with id 9418cc04994875600e4bcbc570c98f8a8cc2307b94ab2805ad79ec7a13bbbc30
	Nov 01 10:50:52 embed-certs-499088 kubelet[778]: W1101 10:50:52.354613     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-241d4faf39d79e39267b4c3d61ccc142b187a9a6e99e4757ac9ff6f50ba137de WatchSource:0}: Error finding container 241d4faf39d79e39267b4c3d61ccc142b187a9a6e99e4757ac9ff6f50ba137de: Status 404 returned error can't find the container with id 241d4faf39d79e39267b4c3d61ccc142b187a9a6e99e4757ac9ff6f50ba137de
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: I1101 10:50:55.261994     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4f4c8f6c-873f-4d2b-9488-d12c3adae611-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-tgcrm\" (UID: \"4f4c8f6c-873f-4d2b-9488-d12c3adae611\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tgcrm"
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: I1101 10:50:55.262573     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/60a4b187-9c7f-4438-921c-cf3017a7270b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fr889\" (UID: \"60a4b187-9c7f-4438-921c-cf3017a7270b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889"
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: I1101 10:50:55.262726     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5h7s\" (UniqueName: \"kubernetes.io/projected/60a4b187-9c7f-4438-921c-cf3017a7270b-kube-api-access-v5h7s\") pod \"dashboard-metrics-scraper-6ffb444bf9-fr889\" (UID: \"60a4b187-9c7f-4438-921c-cf3017a7270b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889"
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: I1101 10:50:55.262850     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc8kw\" (UniqueName: \"kubernetes.io/projected/4f4c8f6c-873f-4d2b-9488-d12c3adae611-kube-api-access-tc8kw\") pod \"kubernetes-dashboard-855c9754f9-tgcrm\" (UID: \"4f4c8f6c-873f-4d2b-9488-d12c3adae611\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tgcrm"
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: W1101 10:50:55.453959     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-ba21ec66a9a919029740869075f91fec1e8739cddc4011ccd3108408b841fe66 WatchSource:0}: Error finding container ba21ec66a9a919029740869075f91fec1e8739cddc4011ccd3108408b841fe66: Status 404 returned error can't find the container with id ba21ec66a9a919029740869075f91fec1e8739cddc4011ccd3108408b841fe66
	Nov 01 10:51:01 embed-certs-499088 kubelet[778]: I1101 10:51:01.365720     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tgcrm" podStartSLOduration=0.792840559 podStartE2EDuration="6.364182969s" podCreationTimestamp="2025-11-01 10:50:55 +0000 UTC" firstStartedPulling="2025-11-01 10:50:55.456588003 +0000 UTC m=+9.735303953" lastFinishedPulling="2025-11-01 10:51:01.027930421 +0000 UTC m=+15.306646363" observedRunningTime="2025-11-01 10:51:01.204734589 +0000 UTC m=+15.483450539" watchObservedRunningTime="2025-11-01 10:51:01.364182969 +0000 UTC m=+15.642898911"
	Nov 01 10:51:08 embed-certs-499088 kubelet[778]: I1101 10:51:08.214059     778 scope.go:117] "RemoveContainer" containerID="e7dddef74ef889f81c7dd211ffc87b748e8035b9cb2c5ab64ce618b3c42c4eaa"
	Nov 01 10:51:09 embed-certs-499088 kubelet[778]: I1101 10:51:09.221567     778 scope.go:117] "RemoveContainer" containerID="5cc6cb74f49db3cf1e43f9cd669afa79c218033e88f845791e3d61da11fab0d7"
	Nov 01 10:51:09 embed-certs-499088 kubelet[778]: E1101 10:51:09.221726     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fr889_kubernetes-dashboard(60a4b187-9c7f-4438-921c-cf3017a7270b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889" podUID="60a4b187-9c7f-4438-921c-cf3017a7270b"
	Nov 01 10:51:09 embed-certs-499088 kubelet[778]: I1101 10:51:09.222717     778 scope.go:117] "RemoveContainer" containerID="e7dddef74ef889f81c7dd211ffc87b748e8035b9cb2c5ab64ce618b3c42c4eaa"
	Nov 01 10:51:10 embed-certs-499088 kubelet[778]: I1101 10:51:10.225693     778 scope.go:117] "RemoveContainer" containerID="5cc6cb74f49db3cf1e43f9cd669afa79c218033e88f845791e3d61da11fab0d7"
	Nov 01 10:51:10 embed-certs-499088 kubelet[778]: E1101 10:51:10.225846     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fr889_kubernetes-dashboard(60a4b187-9c7f-4438-921c-cf3017a7270b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889" podUID="60a4b187-9c7f-4438-921c-cf3017a7270b"
	Nov 01 10:51:19 embed-certs-499088 kubelet[778]: I1101 10:51:19.105705     778 scope.go:117] "RemoveContainer" containerID="5cc6cb74f49db3cf1e43f9cd669afa79c218033e88f845791e3d61da11fab0d7"
	Nov 01 10:51:19 embed-certs-499088 kubelet[778]: I1101 10:51:19.256110     778 scope.go:117] "RemoveContainer" containerID="5cc6cb74f49db3cf1e43f9cd669afa79c218033e88f845791e3d61da11fab0d7"
	Nov 01 10:51:19 embed-certs-499088 kubelet[778]: I1101 10:51:19.256187     778 scope.go:117] "RemoveContainer" containerID="5bedbcf6adb0068c6e314a6cf2ff873b7938d2ba6ec9da57a634909cee70e1fc"
	Nov 01 10:51:19 embed-certs-499088 kubelet[778]: E1101 10:51:19.256472     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fr889_kubernetes-dashboard(60a4b187-9c7f-4438-921c-cf3017a7270b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889" podUID="60a4b187-9c7f-4438-921c-cf3017a7270b"
	Nov 01 10:51:23 embed-certs-499088 kubelet[778]: I1101 10:51:23.269805     778 scope.go:117] "RemoveContainer" containerID="e1d26269c43dedd8c98302d9e3982d65d35c7c8b81d14098592ae01842e55e1d"
	Nov 01 10:51:29 embed-certs-499088 kubelet[778]: I1101 10:51:29.106297     778 scope.go:117] "RemoveContainer" containerID="5bedbcf6adb0068c6e314a6cf2ff873b7938d2ba6ec9da57a634909cee70e1fc"
	Nov 01 10:51:29 embed-certs-499088 kubelet[778]: E1101 10:51:29.106493     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fr889_kubernetes-dashboard(60a4b187-9c7f-4438-921c-cf3017a7270b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889" podUID="60a4b187-9c7f-4438-921c-cf3017a7270b"
	Nov 01 10:51:39 embed-certs-499088 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:51:39 embed-certs-499088 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:51:39 embed-certs-499088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1052577ace73dab7a4b657cf5e7a7050b89edcf1f440e4859057a775cd3e4d49] <==
	2025/11/01 10:51:01 Using namespace: kubernetes-dashboard
	2025/11/01 10:51:01 Using in-cluster config to connect to apiserver
	2025/11/01 10:51:01 Using secret token for csrf signing
	2025/11/01 10:51:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:51:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:51:01 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:51:01 Generating JWE encryption key
	2025/11/01 10:51:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:51:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:51:01 Initializing JWE encryption key from synchronized object
	2025/11/01 10:51:01 Creating in-cluster Sidecar client
	2025/11/01 10:51:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:51:01 Serving insecurely on HTTP port: 9090
	2025/11/01 10:51:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:51:01 Starting overwatch
	
	
	==> storage-provisioner [7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01] <==
	I1101 10:51:23.399175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:51:23.413792       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:51:23.413912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:51:23.417664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:26.873630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:31.135569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:34.739284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:37.793027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:40.825913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:40.848981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:51:40.857448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:51:40.858097       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5491653a-fc59-4529-adde-932caf894aba", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-499088_fa8ea202-270f-4ff8-a1b1-1c37831af23e became leader
	I1101 10:51:40.870128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-499088_fa8ea202-270f-4ff8-a1b1-1c37831af23e!
	W1101 10:51:40.938839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:40.961189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:51:40.973604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-499088_fa8ea202-270f-4ff8-a1b1-1c37831af23e!
	W1101 10:51:42.965387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:42.972228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e1d26269c43dedd8c98302d9e3982d65d35c7c8b81d14098592ae01842e55e1d] <==
	I1101 10:50:52.644505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:51:22.654279       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-499088 -n embed-certs-499088
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-499088 -n embed-certs-499088: exit status 2 (445.407922ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-499088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
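The checks in the post-mortem above can be rerun by hand while the profile is still up. A minimal sketch, assuming the embed-certs-499088 profile from this run has not yet been deleted and using only commands that already appear in this report:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-499088 -n embed-certs-499088
	out/minikube-linux-arm64 -p embed-certs-499088 logs -n 25
	kubectl --context embed-certs-499088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

Had the pause taken effect, the APIServer field would be expected to read Paused rather than Running.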
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-499088
helpers_test.go:243: (dbg) docker inspect embed-certs-499088:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3",
	        "Created": "2025-11-01T10:48:58.141820601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488414,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:50:38.852478778Z",
	            "FinishedAt": "2025-11-01T10:50:37.992079949Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/hostname",
	        "HostsPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/hosts",
	        "LogPath": "/var/lib/docker/containers/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3-json.log",
	        "Name": "/embed-certs-499088",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-499088:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-499088",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3",
	                "LowerDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a2ad846c176467413ab972883b361c633ca25799acd4b80676d1c74473269ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-499088",
	                "Source": "/var/lib/docker/volumes/embed-certs-499088/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-499088",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-499088",
	                "name.minikube.sigs.k8s.io": "embed-certs-499088",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7cebbd817087fa26e249775557a776a2cdc373cf0a9c1e61a3b8d43f90e7e46",
	            "SandboxKey": "/var/run/docker/netns/b7cebbd81708",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-499088": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:1f:4b:33:d5:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b7910a68b927d6e29fdad9c6f3b7dabb12d2d1799598af6a052e70fa72598bc5",
	                    "EndpointID": "6c8ee55baa89f01a9a7bb5bb4af2b879f40fa84ba71b4adee59bf41bd3698876",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-499088",
	                        "495a58a1ddf7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
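The inspect dump above is the full record for the kicbase container backing the embed-certs-499088 profile. When only a few fields are of interest (for example the container state or the published ports), docker's Go-template output can be used instead of scanning the whole document; a sketch using the standard docker CLI, not a command the harness itself runs:

	docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' embed-certs-499088
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-499088

Both queries read the same record shown above.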
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-499088 -n embed-certs-499088
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-499088 -n embed-certs-499088: exit status 2 (454.627913ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-499088 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-499088 logs -n 25: (1.750325531s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:46 UTC │ 01 Nov 25 10:47 UTC │
	│ image   │ old-k8s-version-245622 image list --format=json                                                                                                                                                                                               │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │ 01 Nov 25 10:47 UTC │
	│ pause   │ -p old-k8s-version-245622 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:47 UTC │                     │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p cert-expiration-308600                                                                                                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-014050 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-014050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ stop    │ -p embed-certs-499088 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:51 UTC │
	│ image   │ default-k8s-diff-port-014050 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p disable-driver-mounts-514829                                                                                                                                                                                                               │ disable-driver-mounts-514829 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ image   │ embed-certs-499088 image list --format=json                                                                                                                                                                                                   │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ pause   │ -p embed-certs-499088 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:51:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
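	The start below corresponds to the `start -p no-preload-548708 ... --preload=false` entry in the command table above. A minimal way to rerun the same invocation by hand, assuming the same locally built binary at out/minikube-linux-arm64:
	
		# start a fresh cluster without the preloaded tarball, forcing per-image caching
		out/minikube-linux-arm64 start -p no-preload-548708 --memory=3072 \
		  --alsologtostderr --wait=true --preload=false \
		  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
	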
	I1101 10:51:10.947095  491840 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:10.947337  491840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:10.947366  491840 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:10.947386  491840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:10.947704  491840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:51:10.948201  491840 out.go:368] Setting JSON to false
	I1101 10:51:10.949266  491840 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9223,"bootTime":1761985048,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:51:10.949375  491840 start.go:143] virtualization:  
	I1101 10:51:10.953145  491840 out.go:179] * [no-preload-548708] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:51:10.957205  491840 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:51:10.957334  491840 notify.go:221] Checking for updates...
	I1101 10:51:10.963629  491840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:51:10.966713  491840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:51:10.969717  491840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:51:10.972703  491840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:51:10.975579  491840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:51:10.979108  491840 config.go:182] Loaded profile config "embed-certs-499088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:10.979224  491840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:51:11.009071  491840 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:51:11.009221  491840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:11.080262  491840 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:51:11.069044067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:11.080371  491840 docker.go:319] overlay module found
	I1101 10:51:11.083783  491840 out.go:179] * Using the docker driver based on user configuration
	I1101 10:51:11.086679  491840 start.go:309] selected driver: docker
	I1101 10:51:11.086708  491840 start.go:930] validating driver "docker" against <nil>
	I1101 10:51:11.086742  491840 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:51:11.087478  491840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:11.148096  491840 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:51:11.138337242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:11.148287  491840 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:51:11.148533  491840 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:51:11.151402  491840 out.go:179] * Using Docker driver with root privileges
	I1101 10:51:11.154228  491840 cni.go:84] Creating CNI manager for ""
	I1101 10:51:11.154299  491840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:51:11.154314  491840 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:51:11.154413  491840 start.go:353] cluster config:
	{Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:51:11.157657  491840 out.go:179] * Starting "no-preload-548708" primary control-plane node in "no-preload-548708" cluster
	I1101 10:51:11.160618  491840 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:51:11.163572  491840 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:51:11.166453  491840 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:51:11.166551  491840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:51:11.166624  491840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json ...
	I1101 10:51:11.166662  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json: {Name:mkca8c713f716c8d4bacd660034fcee6498bc69e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:11.168484  491840 cache.go:107] acquiring lock: {Name:mk87c12063bfe6477c1b6ed8fc827cc60e9ca811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.168660  491840 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:51:11.168680  491840 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.769765ms
	I1101 10:51:11.169514  491840 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:51:11.169561  491840 cache.go:107] acquiring lock: {Name:mk98f5306fba9c79ff24fb30add0aac4b2ea9d11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.170394  491840 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:11.170788  491840 cache.go:107] acquiring lock: {Name:mkd94f63e239c14a2fc215ef4549c0b3008ae371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.170986  491840 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:11.171282  491840 cache.go:107] acquiring lock: {Name:mke31e546546420a97a22fb575f397eaa8d20c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.171387  491840 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:11.171517  491840 cache.go:107] acquiring lock: {Name:mk4c1242d2913ae89c6c2d48e391247cfb4b6c0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.171619  491840 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:11.171864  491840 cache.go:107] acquiring lock: {Name:mkf516acd2e5d0c72111e5669f8226bc99c3850c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.172001  491840 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:51:11.172125  491840 cache.go:107] acquiring lock: {Name:mk3196340dda3f6ca3036b488f880ffd822482f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.172231  491840 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:11.172428  491840 cache.go:107] acquiring lock: {Name:mk0a26100d6da9ffb6e62c9df95140af96aec6f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.172520  491840 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:11.174391  491840 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:51:11.176524  491840 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:11.176704  491840 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:11.176850  491840 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:11.177024  491840 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:11.177174  491840 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:11.177330  491840 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:11.192766  491840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:51:11.192791  491840 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:51:11.192807  491840 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:51:11.192830  491840 start.go:360] acquireMachinesLock for no-preload-548708: {Name:mk9ab5039a75ce95aea667171fcdfabc6fc7786c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:11.193002  491840 start.go:364] duration metric: took 152.387µs to acquireMachinesLock for "no-preload-548708"
	I1101 10:51:11.193036  491840 start.go:93] Provisioning new machine with config: &{Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:51:11.193117  491840 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:51:09.838678  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:11.839927  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	I1101 10:51:11.198440  491840 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:51:11.198681  491840 start.go:159] libmachine.API.Create for "no-preload-548708" (driver="docker")
	I1101 10:51:11.198732  491840 client.go:173] LocalClient.Create starting
	I1101 10:51:11.198820  491840 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 10:51:11.198864  491840 main.go:143] libmachine: Decoding PEM data...
	I1101 10:51:11.198878  491840 main.go:143] libmachine: Parsing certificate...
	I1101 10:51:11.198939  491840 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 10:51:11.198956  491840 main.go:143] libmachine: Decoding PEM data...
	I1101 10:51:11.198966  491840 main.go:143] libmachine: Parsing certificate...
	I1101 10:51:11.199337  491840 cli_runner.go:164] Run: docker network inspect no-preload-548708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:51:11.226076  491840 cli_runner.go:211] docker network inspect no-preload-548708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:51:11.226155  491840 network_create.go:284] running [docker network inspect no-preload-548708] to gather additional debugging logs...
	I1101 10:51:11.226176  491840 cli_runner.go:164] Run: docker network inspect no-preload-548708
	W1101 10:51:11.241410  491840 cli_runner.go:211] docker network inspect no-preload-548708 returned with exit code 1
	I1101 10:51:11.241437  491840 network_create.go:287] error running [docker network inspect no-preload-548708]: docker network inspect no-preload-548708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-548708 not found
	I1101 10:51:11.241450  491840 network_create.go:289] output of [docker network inspect no-preload-548708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-548708 not found
	
	** /stderr **
	I1101 10:51:11.241547  491840 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:51:11.263212  491840 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
	I1101 10:51:11.263671  491840 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-adecbbb769f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:b0:b5:2e:4c:30} reservation:<nil>}
	I1101 10:51:11.263953  491840 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2077d26d1806 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:49:68:b6:9e:fb} reservation:<nil>}
	I1101 10:51:11.264358  491840 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b7910a68b927 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:9a:fb:6d:bb:9f} reservation:<nil>}
	I1101 10:51:11.264913  491840 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c21e70}
	I1101 10:51:11.264977  491840 network_create.go:124] attempt to create docker network no-preload-548708 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:51:11.265049  491840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-548708 no-preload-548708
	I1101 10:51:11.346310  491840 network_create.go:108] docker network no-preload-548708 192.168.85.0/24 created
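	The subnet scan above skips 192.168.49/58/67/76.0/24 because other profiles already hold them, then takes the first free /24 (192.168.85.0/24) and creates the bridge network with the `docker network create` call shown. A quick check of what was actually assigned, assuming the network still exists:
	
		# confirm the subnet and gateway minikube assigned to the profile network
		docker network inspect no-preload-548708 \
		  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
		# expected for this run: 192.168.85.0/24 192.168.85.1
	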
	I1101 10:51:11.346392  491840 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-548708" container
	I1101 10:51:11.346519  491840 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:51:11.362119  491840 cli_runner.go:164] Run: docker volume create no-preload-548708 --label name.minikube.sigs.k8s.io=no-preload-548708 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:51:11.379541  491840 oci.go:103] Successfully created a docker volume no-preload-548708
	I1101 10:51:11.379633  491840 cli_runner.go:164] Run: docker run --rm --name no-preload-548708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-548708 --entrypoint /usr/bin/test -v no-preload-548708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:51:11.482710  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1101 10:51:11.497113  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:51:11.506013  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:51:11.518854  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:51:11.527226  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:51:11.530418  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:51:11.536008  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:51:11.536058  491840 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 364.196832ms
	I1101 10:51:11.536071  491840 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:51:11.540984  491840 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:51:11.967221  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:51:11.967242  491840 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 795.727157ms
	I1101 10:51:11.967254  491840 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:51:12.050898  491840 oci.go:107] Successfully prepared a docker volume no-preload-548708
	I1101 10:51:12.050961  491840 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1101 10:51:12.051134  491840 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:51:12.051250  491840 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:51:12.118836  491840 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-548708 --name no-preload-548708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-548708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-548708 --network no-preload-548708 --ip 192.168.85.2 --volume no-preload-548708:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
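	The `docker run` above is the KIC node itself: a privileged kicbase container on the new network with a static IP, with SSH, the API server port and a few service ports published only on 127.0.0.1 behind dynamically assigned host ports. A hypothetical check of the mappings Docker picked:
	
		# list the host ports assigned to the published container ports
		docker port no-preload-548708
		# e.g. 22/tcp -> 127.0.0.1:33448   (the SSH port the provisioning steps below use)
	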
	I1101 10:51:12.490303  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:51:12.490453  491840 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.318019822s
	I1101 10:51:12.490469  491840 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:51:12.543605  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:51:12.543682  491840 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.372402737s
	I1101 10:51:12.543720  491840 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:51:12.601454  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:51:12.601478  491840 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.430696745s
	I1101 10:51:12.601490  491840 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:51:12.619361  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Running}}
	I1101 10:51:12.653191  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:51:12.668582  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:51:12.668609  491840 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.499055544s
	I1101 10:51:12.668622  491840 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:51:12.686327  491840 cli_runner.go:164] Run: docker exec no-preload-548708 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:51:12.766734  491840 oci.go:144] the created container "no-preload-548708" has a running status.
	I1101 10:51:12.766762  491840 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa...
	I1101 10:51:13.200710  491840 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:51:13.247296  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:51:13.286179  491840 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:51:13.286207  491840 kic_runner.go:114] Args: [docker exec --privileged no-preload-548708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:51:13.396516  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:51:13.414376  491840 machine.go:94] provisionDockerMachine start ...
	I1101 10:51:13.414482  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:13.433538  491840 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:13.433879  491840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1101 10:51:13.433895  491840 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:51:13.434533  491840 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:51:13.561223  491840 cache.go:157] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:51:13.561254  491840 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.389131027s
	I1101 10:51:13.561274  491840 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:51:13.561313  491840 cache.go:87] Successfully saved all images to host disk.
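	Because --preload=false disables the preloaded image tarball, each control-plane image is fetched individually and saved as a tarball under the cache directory inside MINIKUBE_HOME; the kube-proxy, coredns, scheduler, controller-manager, apiserver, pause and etcd saves above all land there. A sketch of inspecting that cache, using the MINIKUBE_HOME from this run:
	
		# list the per-image tarballs written by the no-preload start
		MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
		ls "$MINIKUBE_HOME/cache/images/arm64/registry.k8s.io/"
		# expected: etcd_3.6.4-0, pause_3.10.1, kube-*_v1.34.1, plus a coredns/ subdirectory
	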
	W1101 10:51:14.337069  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:16.337224  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:18.338239  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	I1101 10:51:16.588562  491840 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548708
	
	I1101 10:51:16.588586  491840 ubuntu.go:182] provisioning hostname "no-preload-548708"
	I1101 10:51:16.588653  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:16.605862  491840 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:16.606176  491840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1101 10:51:16.606193  491840 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-548708 && echo "no-preload-548708" | sudo tee /etc/hostname
	I1101 10:51:16.770854  491840 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548708
	
	I1101 10:51:16.770950  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:16.788896  491840 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:16.789253  491840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1101 10:51:16.789282  491840 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-548708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-548708/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-548708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:51:16.945277  491840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:51:16.945315  491840 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:51:16.945370  491840 ubuntu.go:190] setting up certificates
	I1101 10:51:16.945381  491840 provision.go:84] configureAuth start
	I1101 10:51:16.945443  491840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:51:16.963866  491840 provision.go:143] copyHostCerts
	I1101 10:51:16.963936  491840 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:51:16.963950  491840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:51:16.964033  491840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:51:16.964162  491840 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:51:16.964175  491840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:51:16.964203  491840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:51:16.964270  491840 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:51:16.964278  491840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:51:16.964304  491840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:51:16.964365  491840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.no-preload-548708 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-548708]
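	configureAuth generates a machine server certificate whose SANs cover the loopback address, the node's static IP, and the hostnames listed above. To confirm the SANs on disk (path taken from the log line above; offered only as an illustrative check), a standard openssl inspection works:
	
		# print the Subject Alternative Names of the generated server certificate
		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem \
		  | grep -A1 'Subject Alternative Name'
		# should list 127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-548708
	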
	I1101 10:51:17.502066  491840 provision.go:177] copyRemoteCerts
	I1101 10:51:17.502150  491840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:51:17.502193  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:17.521208  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:17.628807  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:51:17.652329  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:51:17.671727  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:51:17.690971  491840 provision.go:87] duration metric: took 745.553231ms to configureAuth
	I1101 10:51:17.691045  491840 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:51:17.691299  491840 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:17.691455  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:17.710666  491840 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:17.710975  491840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1101 10:51:17.710996  491840 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:51:18.071414  491840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:51:18.071440  491840 machine.go:97] duration metric: took 4.657045409s to provisionDockerMachine
	I1101 10:51:18.071458  491840 client.go:176] duration metric: took 6.87270752s to LocalClient.Create
	I1101 10:51:18.071480  491840 start.go:167] duration metric: took 6.872800067s to libmachine.API.Create "no-preload-548708"
	I1101 10:51:18.071492  491840 start.go:293] postStartSetup for "no-preload-548708" (driver="docker")
	I1101 10:51:18.071503  491840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:51:18.071588  491840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:51:18.071647  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:18.090205  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:18.197172  491840 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:51:18.200469  491840 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:51:18.200498  491840 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:51:18.200511  491840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:51:18.200584  491840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:51:18.200664  491840 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:51:18.200771  491840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:51:18.208241  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:51:18.227109  491840 start.go:296] duration metric: took 155.6025ms for postStartSetup
	I1101 10:51:18.227514  491840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:51:18.244551  491840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json ...
	I1101 10:51:18.244838  491840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:51:18.244891  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:18.262008  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:18.366442  491840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:51:18.371507  491840 start.go:128] duration metric: took 7.178367436s to createHost
	I1101 10:51:18.371538  491840 start.go:83] releasing machines lock for "no-preload-548708", held for 7.178518649s
	I1101 10:51:18.371625  491840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:51:18.391202  491840 ssh_runner.go:195] Run: cat /version.json
	I1101 10:51:18.391218  491840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:51:18.391256  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:18.391287  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:51:18.417234  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:18.418056  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:51:18.520707  491840 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:18.623345  491840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:51:18.659043  491840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:51:18.663855  491840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:51:18.663929  491840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:51:18.696500  491840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:51:18.696574  491840 start.go:496] detecting cgroup driver to use...
	I1101 10:51:18.696625  491840 detect.go:187] detected "cgroupfs" cgroup driver on host os
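	The detect step settles on "cgroupfs", which agrees with the CgroupDriver:cgroupfs field in the docker info dumps earlier in this log; CRI-O is configured to match a few lines below. A direct way to query the daemon's value, shown only to illustrate where the setting is visible rather than as what detect.go itself does:
	
		# ask the Docker daemon which cgroup driver it is using
		docker info --format '{{.CgroupDriver}}'
		# cgroupfs on this host; "systemd" here would lead to cgroup_manager = "systemd" instead
	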
	I1101 10:51:18.696707  491840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:51:18.716416  491840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:51:18.729707  491840 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:51:18.729770  491840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:51:18.746493  491840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:51:18.765764  491840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:51:18.917297  491840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:51:19.048005  491840 docker.go:234] disabling docker service ...
	I1101 10:51:19.048102  491840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:51:19.070200  491840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:51:19.084347  491840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:51:19.232591  491840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:51:19.362395  491840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:51:19.376279  491840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:51:19.392177  491840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:51:19.392249  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.401793  491840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:51:19.401884  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.410980  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.420042  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.429942  491840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:51:19.438178  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.447232  491840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.461271  491840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:51:19.471147  491840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:51:19.479689  491840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:51:19.487211  491840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:51:19.611909  491840 ssh_runner.go:195] Run: sudo systemctl restart crio
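	The sed edits above rewrite the 02-crio.conf drop-in so that CRI-O uses the pause:3.10.1 image, the cgroupfs cgroup manager with conmon in the pod cgroup, and a default sysctl that opens unprivileged low ports, before the daemon-reload and crio restart pick the changes up. One way to verify the resulting drop-in on the node after the restart (an illustrative check, not part of the test run):
	
		# confirm the values the sed edits wrote into the CRI-O drop-in
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.10.1"
		# cgroup_manager = "cgroupfs"
		# conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",
	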
	I1101 10:51:19.740592  491840 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:51:19.740707  491840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:51:19.744617  491840 start.go:564] Will wait 60s for crictl version
	I1101 10:51:19.744712  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:19.748470  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:51:19.777791  491840 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:51:19.777907  491840 ssh_runner.go:195] Run: crio --version
	I1101 10:51:19.814847  491840 ssh_runner.go:195] Run: crio --version
	I1101 10:51:19.850125  491840 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:51:19.853094  491840 cli_runner.go:164] Run: docker network inspect no-preload-548708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:51:19.869726  491840 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:51:19.874408  491840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
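	The /etc/hosts rewrite above maps the network gateway 192.168.85.1 to host.minikube.internal so processes inside the node can reach services running on the host. A quick resolution check from inside the node, assuming the profile is still running (the ssh form here is illustrative):
	
		# resolve host.minikube.internal from inside the no-preload node
		out/minikube-linux-arm64 -p no-preload-548708 ssh -- getent hosts host.minikube.internal
		# 192.168.85.1  host.minikube.internal
	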
	I1101 10:51:19.884428  491840 kubeadm.go:884] updating cluster {Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:51:19.884546  491840 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:51:19.884596  491840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:51:19.913426  491840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 10:51:19.913454  491840 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 10:51:19.913497  491840 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:19.913694  491840 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:19.913783  491840 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:19.913868  491840 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:19.913958  491840 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:19.914046  491840 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:51:19.914147  491840 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:19.914234  491840 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:19.915275  491840 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:51:19.915536  491840 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:19.915672  491840 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:19.915827  491840 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:19.915972  491840 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:19.916453  491840 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:19.916970  491840 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:19.917013  491840 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.135156  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.137615  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1101 10:51:20.145257  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.146049  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.147193  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.153755  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.155100  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.277648  491840 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1101 10:51:20.277762  491840 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.277852  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.318970  491840 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1101 10:51:20.319095  491840 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1101 10:51:20.319214  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.363999  491840 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1101 10:51:20.364039  491840 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.364090  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364163  491840 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1101 10:51:20.364180  491840 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.364201  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364261  491840 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1101 10:51:20.364285  491840 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.364307  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364370  491840 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1101 10:51:20.364387  491840 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.364410  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364478  491840 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1101 10:51:20.364496  491840 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.364520  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:20.364609  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.364668  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:51:20.414396  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:51:20.414475  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.414533  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.414593  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.414656  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.414710  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.414788  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.528560  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.528637  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:51:20.528685  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.528742  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:51:20.534173  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.534253  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.534315  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.646924  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:51:20.647108  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:51:20.647250  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:51:20.647358  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1101 10:51:20.647504  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:51:20.647558  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1101 10:51:20.651218  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:51:20.651486  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:51:20.651621  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:51:20.727206  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1101 10:51:20.727312  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1101 10:51:20.727472  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:51:20.727368  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1101 10:51:20.727538  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1101 10:51:20.727610  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:51:20.727679  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:51:20.727796  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:51:20.763140  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:51:20.763243  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:51:20.763329  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:51:20.763405  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:51:20.763459  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:51:20.763510  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:51:20.763564  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1101 10:51:20.763582  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1101 10:51:20.763620  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1101 10:51:20.763634  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1101 10:51:20.808444  491840 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1101 10:51:20.808512  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1101 10:51:20.817952  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1101 10:51:20.817990  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1101 10:51:20.818038  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1101 10:51:20.818054  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1101 10:51:20.818083  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1101 10:51:20.818097  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
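	The stat / scp pairs above implement a simple existence check: each cached image tarball is copied into /var/lib/minikube/images only when stat reports it missing. A minimal local sketch of the same pattern, under the assumption of plain files rather than an SSH session (ensureFile is a made-up name):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only when dst does not already exist,
// mirroring the "stat -c ..." check followed by scp in the log above.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := ensureFile("pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}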
	W1101 10:51:20.338689  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	W1101 10:51:22.837155  488285 pod_ready.go:104] pod "coredns-66bc5c9577-pdh6r" is not "Ready", error: <nil>
	I1101 10:51:24.337958  488285 pod_ready.go:94] pod "coredns-66bc5c9577-pdh6r" is "Ready"
	I1101 10:51:24.337987  488285 pod_ready.go:86] duration metric: took 31.007222811s for pod "coredns-66bc5c9577-pdh6r" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.341357  488285 pod_ready.go:83] waiting for pod "etcd-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.347427  488285 pod_ready.go:94] pod "etcd-embed-certs-499088" is "Ready"
	I1101 10:51:24.347458  488285 pod_ready.go:86] duration metric: took 6.06948ms for pod "etcd-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.350580  488285 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.356228  488285 pod_ready.go:94] pod "kube-apiserver-embed-certs-499088" is "Ready"
	I1101 10:51:24.356260  488285 pod_ready.go:86] duration metric: took 5.649004ms for pod "kube-apiserver-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.364054  488285 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.534602  488285 pod_ready.go:94] pod "kube-controller-manager-embed-certs-499088" is "Ready"
	I1101 10:51:24.534631  488285 pod_ready.go:86] duration metric: took 170.548921ms for pod "kube-controller-manager-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:24.735367  488285 pod_ready.go:83] waiting for pod "kube-proxy-dqf86" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:25.135342  488285 pod_ready.go:94] pod "kube-proxy-dqf86" is "Ready"
	I1101 10:51:25.135379  488285 pod_ready.go:86] duration metric: took 399.980843ms for pod "kube-proxy-dqf86" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:25.335244  488285 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:25.735718  488285 pod_ready.go:94] pod "kube-scheduler-embed-certs-499088" is "Ready"
	I1101 10:51:25.735744  488285 pod_ready.go:86] duration metric: took 400.476675ms for pod "kube-scheduler-embed-certs-499088" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:25.735758  488285 pod_ready.go:40] duration metric: took 32.409170234s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:51:25.805442  488285 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:51:25.809343  488285 out.go:179] * Done! kubectl is now configured to use "embed-certs-499088" cluster and "default" namespace by default
	I1101 10:51:21.198019  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	W1101 10:51:21.240534  491840 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1101 10:51:21.240712  491840 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:21.342108  491840 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:51:21.342182  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:51:21.414845  491840 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1101 10:51:21.414892  491840 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:21.414942  491840 ssh_runner.go:195] Run: which crictl
	I1101 10:51:23.107654  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.765444351s)
	I1101 10:51:23.107687  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1101 10:51:23.107706  491840 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:51:23.107749  491840 ssh_runner.go:235] Completed: which crictl: (1.692793073s)
	I1101 10:51:23.107827  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:51:23.107876  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:24.914124  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.806270888s)
	I1101 10:51:24.914152  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1101 10:51:24.914170  491840 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:51:24.914219  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:51:24.914283  491840 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.806374635s)
	I1101 10:51:24.914321  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:26.323948  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.409702687s)
	I1101 10:51:26.323983  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1101 10:51:26.323987  491840 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.409650083s)
	I1101 10:51:26.324005  491840 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:51:26.324051  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:51:26.324054  491840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:51:27.729497  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.405421694s)
	I1101 10:51:27.729525  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1101 10:51:27.729537  491840 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.40546597s)
	I1101 10:51:27.729589  491840 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 10:51:27.729544  491840 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:51:27.729668  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:51:27.729676  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:51:29.167381  491840 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.437684526s)
	I1101 10:51:29.167411  491840 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1101 10:51:29.167437  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1101 10:51:29.167578  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.437900019s)
	I1101 10:51:29.167595  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1101 10:51:29.167611  491840 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:51:29.167651  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:51:32.932132  491840 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.764453137s)
	I1101 10:51:32.932161  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1101 10:51:32.932180  491840 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:51:32.932228  491840 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:51:33.497298  491840 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 10:51:33.497330  491840 cache_images.go:125] Successfully loaded all cached images
	I1101 10:51:33.497337  491840 cache_images.go:94] duration metric: took 13.583866035s to LoadCachedImages
	I1101 10:51:33.497347  491840 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:51:33.497436  491840 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-548708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:51:33.497517  491840 ssh_runner.go:195] Run: crio config
	I1101 10:51:33.575787  491840 cni.go:84] Creating CNI manager for ""
	I1101 10:51:33.575813  491840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:51:33.575831  491840 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:51:33.575856  491840 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-548708 NodeName:no-preload-548708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:51:33.575987  491840 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-548708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:51:33.576065  491840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:51:33.585491  491840 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1101 10:51:33.585560  491840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1101 10:51:33.594300  491840 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1101 10:51:33.594461  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1101 10:51:33.594819  491840 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1101 10:51:33.594866  491840 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1101 10:51:33.599654  491840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1101 10:51:33.599751  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1101 10:51:34.414831  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1101 10:51:34.419180  491840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1101 10:51:34.419215  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1101 10:51:34.572735  491840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:51:34.614591  491840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1101 10:51:34.622811  491840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1101 10:51:34.623009  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
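	The kubectl/kubeadm/kubelet binaries above are fetched from dl.k8s.io with a companion .sha256 file used as the checksum source. A hedged sketch of that download-and-verify pattern (downloadVerified is illustrative, not the binary.go implementation):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadVerified streams url into dst while hashing it, then compares the
// digest against the published <url>.sha256 file before trusting the binary.
func downloadVerified(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	h := sha256.New()
	_, err = io.Copy(io.MultiWriter(out, h), resp.Body)
	out.Close()
	if err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sum, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 || hex.EncodeToString(h.Sum(nil)) != fields[0] {
		return fmt.Errorf("checksum mismatch for %s", dst)
	}
	return nil
}

func main() {
	if err := downloadVerified("https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm", "kubeadm"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}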
	I1101 10:51:35.075169  491840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:51:35.085650  491840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:51:35.101291  491840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:51:35.116015  491840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 10:51:35.130495  491840 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:51:35.134742  491840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:51:35.144880  491840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:51:35.277991  491840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:51:35.300377  491840 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708 for IP: 192.168.85.2
	I1101 10:51:35.300396  491840 certs.go:195] generating shared ca certs ...
	I1101 10:51:35.300413  491840 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.300551  491840 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:51:35.300607  491840 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:51:35.300615  491840 certs.go:257] generating profile certs ...
	I1101 10:51:35.300669  491840 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.key
	I1101 10:51:35.300679  491840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt with IP's: []
	I1101 10:51:35.587432  491840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt ...
	I1101 10:51:35.587466  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: {Name:mk5eecb53de2e7b31296c469aa0fcf5576099ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.587667  491840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.key ...
	I1101 10:51:35.587685  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.key: {Name:mk97e7571d9b0cfdf071850bdfb54a6f4112332d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.587783  491840 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3
	I1101 10:51:35.587801  491840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt.71cdcdd3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:51:35.970711  491840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt.71cdcdd3 ...
	I1101 10:51:35.970743  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt.71cdcdd3: {Name:mkd6571c82be51a52a39f02101721cfb4c8d3e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.970965  491840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3 ...
	I1101 10:51:35.970984  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3: {Name:mk61f24f769a1202305257f37f3377591456bec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:35.971075  491840 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt.71cdcdd3 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt
	I1101 10:51:35.971161  491840 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3 -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key
	I1101 10:51:35.971221  491840 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key
	I1101 10:51:35.971239  491840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt with IP's: []
	I1101 10:51:36.464548  491840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt ...
	I1101 10:51:36.464582  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt: {Name:mkf2ea3aa30ff0861fcfc606ae4a49fcd48cd025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:36.464773  491840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key ...
	I1101 10:51:36.464788  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key: {Name:mkf70d7ffe2a37d9712976bed8f0a87a77196116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:36.465024  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:51:36.465068  491840 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:51:36.465081  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:51:36.465110  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:51:36.465138  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:51:36.465164  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:51:36.465209  491840 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:51:36.465762  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:51:36.486700  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:51:36.507116  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:51:36.527957  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:51:36.548084  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:51:36.567057  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:51:36.585446  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:51:36.603600  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:51:36.621912  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:51:36.640242  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:51:36.658854  491840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:51:36.676884  491840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:51:36.691261  491840 ssh_runner.go:195] Run: openssl version
	I1101 10:51:36.700480  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:51:36.711343  491840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:51:36.716009  491840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:51:36.716079  491840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:51:36.758162  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:51:36.766931  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:51:36.776205  491840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:51:36.780845  491840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:51:36.780914  491840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:51:36.823482  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:51:36.832657  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:51:36.841663  491840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:51:36.846104  491840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:51:36.846174  491840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:51:36.888399  491840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
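	The openssl x509 -hash / ln -fs sequence above is how the copied PEM files become trusted: each certificate is linked into /etc/ssl/certs under its OpenSSL subject-hash name with a .0 suffix. A small Go sketch of that step, assuming the openssl CLI is on PATH (installCA is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash for a PEM certificate and,
// if no link exists yet, symlinks it into /etc/ssl/certs as "<hash>.0".
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}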
	I1101 10:51:36.897303  491840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:51:36.902696  491840 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:51:36.902775  491840 kubeadm.go:401] StartCluster: {Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:51:36.902871  491840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:36.902964  491840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:36.934175  491840 cri.go:89] found id: ""
	I1101 10:51:36.934312  491840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:51:36.945777  491840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:51:36.958992  491840 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:51:36.959067  491840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:51:36.967077  491840 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:51:36.967098  491840 kubeadm.go:158] found existing configuration files:
	
	I1101 10:51:36.967154  491840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:51:36.974971  491840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:51:36.975036  491840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:51:36.982998  491840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:51:36.991398  491840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:51:36.991517  491840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:51:36.999723  491840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:51:37.010384  491840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:51:37.010456  491840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:51:37.023120  491840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:51:37.032769  491840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:51:37.032839  491840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
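	The grep / rm -f pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm can regenerate it. Roughly, under the assumption of local file access (cleanStaleConfig is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfig removes a kubeconfig that does not reference the expected
// control-plane endpoint; missing files are left alone.
func cleanStaleConfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean
	} else if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already targets the right endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf", "/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"} {
		if err := cleanStaleConfig(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}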
	I1101 10:51:37.041625  491840 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:51:37.109408  491840 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:51:37.109654  491840 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:51:37.177718  491840 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.271598671Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bbf54b36-d224-4c6d-a8a5-24aaadec88dd name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.272532374Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=82594960-938e-4be7-bd46-5896e1e31075 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.27265249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.290084012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.291439192Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a14337c32829eb9127042a18ba9d54dd7b47d8b0bc82a787002ba796c2e15386/merged/etc/passwd: no such file or directory"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.291616851Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a14337c32829eb9127042a18ba9d54dd7b47d8b0bc82a787002ba796c2e15386/merged/etc/group: no such file or directory"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.291981285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.371687521Z" level=info msg="Created container 7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01: kube-system/storage-provisioner/storage-provisioner" id=82594960-938e-4be7-bd46-5896e1e31075 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.372757858Z" level=info msg="Starting container: 7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01" id=2ac6aed8-1d24-4f93-bf63-8069ded57d71 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:51:23 embed-certs-499088 crio[650]: time="2025-11-01T10:51:23.381667922Z" level=info msg="Started container" PID=1646 containerID=7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01 description=kube-system/storage-provisioner/storage-provisioner id=2ac6aed8-1d24-4f93-bf63-8069ded57d71 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a75362f659e7f059f24dadde6bd5456f870dad9029347c007129f9f06601b5c
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.761583715Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.767988649Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.768023152Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.768050352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.771206016Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.77126639Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.771290784Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.774388946Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.774420741Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.774441739Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.779022352Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.779082931Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.779104305Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.783344837Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:51:32 embed-certs-499088 crio[650]: time="2025-11-01T10:51:32.783396382Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7440e8684eb54       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   3a75362f659e7       storage-provisioner                          kube-system
	5bedbcf6adb00       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   afd9a48daee30       dashboard-metrics-scraper-6ffb444bf9-fr889   kubernetes-dashboard
	1052577ace73d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago      Running             kubernetes-dashboard        0                   ba21ec66a9a91       kubernetes-dashboard-855c9754f9-tgcrm        kubernetes-dashboard
	6973f868af677       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago      Running             coredns                     1                   241d4faf39d79       coredns-66bc5c9577-pdh6r                     kube-system
	254d0250d4472       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   9418cc0499487       busybox                                      default
	e1d26269c43de       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   3a75362f659e7       storage-provisioner                          kube-system
	354489decdc5b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago      Running             kube-proxy                  1                   c021690cd244a       kube-proxy-dqf86                             kube-system
	66a7d9d0871f5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   05f1bfa4d8fe6       kindnet-9sr9j                                kube-system
	0de30b77d1ca1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   94483ef080498       kube-controller-manager-embed-certs-499088   kube-system
	a312b63badfe9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   ac50bcba8c2f5       kube-apiserver-embed-certs-499088            kube-system
	0ef612cf67931       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   bf6b3fe24aad5       etcd-embed-certs-499088                      kube-system
	59e8eb3202b22       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   1d3b3f9dfdeb0       kube-scheduler-embed-certs-499088            kube-system
	
	
	==> coredns [6973f868af67739d0ca69e54523b07f8023a75440e79117e45dc08ac4cd4eadb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40481 - 443 "HINFO IN 1211577556126345724.1387487907648605909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020902743s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-499088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-499088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=embed-certs-499088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_49_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:49:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-499088
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:51:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:51:32 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:51:32 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:51:32 +0000   Sat, 01 Nov 2025 10:49:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:51:32 +0000   Sat, 01 Nov 2025 10:50:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-499088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                07472705-003c-41a7-ae50-6d94d68f067a
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-pdh6r                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m15s
	  kube-system                 etcd-embed-certs-499088                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-9sr9j                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-embed-certs-499088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-embed-certs-499088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-dqf86                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-embed-certs-499088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fr889    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tgcrm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m14s              kube-proxy       
	  Normal   Starting                 52s                kube-proxy       
	  Normal   Starting                 2m21s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m20s              kubelet          Node embed-certs-499088 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m20s              kubelet          Node embed-certs-499088 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m20s              kubelet          Node embed-certs-499088 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m17s              node-controller  Node embed-certs-499088 event: Registered Node embed-certs-499088 in Controller
	  Normal   NodeReady                94s                kubelet          Node embed-certs-499088 status is now: NodeReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node embed-certs-499088 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node embed-certs-499088 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 60s)  kubelet          Node embed-certs-499088 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                node-controller  Node embed-certs-499088 event: Registered Node embed-certs-499088 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ef612cf67931e99b0ff0b2cd78a42bcb290e5834448357a04f331cca1ab13cc] <==
	{"level":"warn","ts":"2025-11-01T10:50:49.493017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.569202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.632253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.681140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.708193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.743492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.775438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.814055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.845748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.872434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.891829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.912132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.937559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.963783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:49.982416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.016118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.019924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.037248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.062640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.076022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.099199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.142208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.166372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.193670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:50:50.246093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:51:45 up  2:34,  0 user,  load average: 4.17, 3.58, 2.91
	Linux embed-certs-499088 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [66a7d9d0871f59caa5a654326fa6af58cf9a0cb60f71adebd11d70504d202a8f] <==
	I1101 10:50:52.479420       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:50:52.525332       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:50:52.525617       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:50:52.525824       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:50:52.525848       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:50:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:50:52.761636       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:50:52.761729       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:50:52.761740       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:50:52.763883       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:51:22.762288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:51:22.763605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:51:22.763728       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:51:22.763818       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:51:24.361924       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:51:24.362017       1 metrics.go:72] Registering metrics
	I1101 10:51:24.362141       1 controller.go:711] "Syncing nftables rules"
	I1101 10:51:32.761237       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:51:32.761350       1 main.go:301] handling current node
	I1101 10:51:42.769848       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:51:42.769881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a312b63badfe91286205ab3f2506b1f28b4e42298c8d0022b0e1c17bcddc1e12] <==
	I1101 10:50:51.232390       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:50:51.244665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:50:51.251709       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:50:51.251995       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:50:51.252060       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:50:51.257835       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:50:51.263123       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:50:51.263320       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:50:51.263359       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:50:51.263410       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:50:51.268570       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:50:51.268610       1 policy_source.go:240] refreshing policies
	E1101 10:50:51.279225       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:50:51.331630       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:50:51.845005       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:50:51.934289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:50:52.035393       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:50:52.130956       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:50:52.201422       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:50:52.250505       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:50:52.608976       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.40.158"}
	I1101 10:50:52.665079       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.149.227"}
	I1101 10:50:54.827109       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:50:54.878991       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:50:54.976768       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0de30b77d1ca10da59b96521a28d795e3e2f58d2bf5933e2fc6be1269644272f] <==
	I1101 10:50:54.452386       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:50:54.455560       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:50:54.458778       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:50:54.459940       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:50:54.459991       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:50:54.460023       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:50:54.460036       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:50:54.460043       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:50:54.462121       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:50:54.467495       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:50:54.470130       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:50:54.471376       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:50:54.471420       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:50:54.471459       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:50:54.473055       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:50:54.473192       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:50:54.473291       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-499088"
	I1101 10:50:54.473361       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:50:54.478300       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:50:54.478782       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:50:54.480709       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:50:54.480721       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:50:54.487994       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:50:54.488021       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:50:54.488030       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [354489decdc5be3f11a1c587685b3a87320c7a34e86b10f5cc6b354777034093] <==
	I1101 10:50:52.745594       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:50:52.866580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:50:52.968457       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:50:52.968565       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:50:52.968721       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:50:52.996905       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:50:52.997356       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:50:53.015239       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:50:53.015639       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:50:53.015954       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:50:53.017889       1 config.go:200] "Starting service config controller"
	I1101 10:50:53.017963       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:50:53.018006       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:50:53.018054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:50:53.018094       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:50:53.018128       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:50:53.018798       1 config.go:309] "Starting node config controller"
	I1101 10:50:53.018869       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:50:53.018900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:50:53.118961       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:50:53.119153       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:50:53.119168       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [59e8eb3202b226a9242a2418d10ad312d3fe21ba3c8163fbf7bfede124b48607] <==
	I1101 10:50:48.588161       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:50:51.101109       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:50:51.101227       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:50:51.101263       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:50:51.101313       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:50:51.222668       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:50:51.222697       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:50:51.228843       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:50:51.228982       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:50:51.229003       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:50:51.229051       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:50:51.339986       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:50:52 embed-certs-499088 kubelet[778]: W1101 10:50:52.222732     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-05f1bfa4d8fe6539dabc59940530d691cfb02e49cd96ada5681ee446d2f8c43a WatchSource:0}: Error finding container 05f1bfa4d8fe6539dabc59940530d691cfb02e49cd96ada5681ee446d2f8c43a: Status 404 returned error can't find the container with id 05f1bfa4d8fe6539dabc59940530d691cfb02e49cd96ada5681ee446d2f8c43a
	Nov 01 10:50:52 embed-certs-499088 kubelet[778]: W1101 10:50:52.337852     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-9418cc04994875600e4bcbc570c98f8a8cc2307b94ab2805ad79ec7a13bbbc30 WatchSource:0}: Error finding container 9418cc04994875600e4bcbc570c98f8a8cc2307b94ab2805ad79ec7a13bbbc30: Status 404 returned error can't find the container with id 9418cc04994875600e4bcbc570c98f8a8cc2307b94ab2805ad79ec7a13bbbc30
	Nov 01 10:50:52 embed-certs-499088 kubelet[778]: W1101 10:50:52.354613     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-241d4faf39d79e39267b4c3d61ccc142b187a9a6e99e4757ac9ff6f50ba137de WatchSource:0}: Error finding container 241d4faf39d79e39267b4c3d61ccc142b187a9a6e99e4757ac9ff6f50ba137de: Status 404 returned error can't find the container with id 241d4faf39d79e39267b4c3d61ccc142b187a9a6e99e4757ac9ff6f50ba137de
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: I1101 10:50:55.261994     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4f4c8f6c-873f-4d2b-9488-d12c3adae611-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-tgcrm\" (UID: \"4f4c8f6c-873f-4d2b-9488-d12c3adae611\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tgcrm"
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: I1101 10:50:55.262573     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/60a4b187-9c7f-4438-921c-cf3017a7270b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fr889\" (UID: \"60a4b187-9c7f-4438-921c-cf3017a7270b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889"
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: I1101 10:50:55.262726     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5h7s\" (UniqueName: \"kubernetes.io/projected/60a4b187-9c7f-4438-921c-cf3017a7270b-kube-api-access-v5h7s\") pod \"dashboard-metrics-scraper-6ffb444bf9-fr889\" (UID: \"60a4b187-9c7f-4438-921c-cf3017a7270b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889"
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: I1101 10:50:55.262850     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc8kw\" (UniqueName: \"kubernetes.io/projected/4f4c8f6c-873f-4d2b-9488-d12c3adae611-kube-api-access-tc8kw\") pod \"kubernetes-dashboard-855c9754f9-tgcrm\" (UID: \"4f4c8f6c-873f-4d2b-9488-d12c3adae611\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tgcrm"
	Nov 01 10:50:55 embed-certs-499088 kubelet[778]: W1101 10:50:55.453959     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/495a58a1ddf7acebd106ed5e4a020ff1f563bc7912fb7d3d9d40eb40a7af3ab3/crio-ba21ec66a9a919029740869075f91fec1e8739cddc4011ccd3108408b841fe66 WatchSource:0}: Error finding container ba21ec66a9a919029740869075f91fec1e8739cddc4011ccd3108408b841fe66: Status 404 returned error can't find the container with id ba21ec66a9a919029740869075f91fec1e8739cddc4011ccd3108408b841fe66
	Nov 01 10:51:01 embed-certs-499088 kubelet[778]: I1101 10:51:01.365720     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tgcrm" podStartSLOduration=0.792840559 podStartE2EDuration="6.364182969s" podCreationTimestamp="2025-11-01 10:50:55 +0000 UTC" firstStartedPulling="2025-11-01 10:50:55.456588003 +0000 UTC m=+9.735303953" lastFinishedPulling="2025-11-01 10:51:01.027930421 +0000 UTC m=+15.306646363" observedRunningTime="2025-11-01 10:51:01.204734589 +0000 UTC m=+15.483450539" watchObservedRunningTime="2025-11-01 10:51:01.364182969 +0000 UTC m=+15.642898911"
	Nov 01 10:51:08 embed-certs-499088 kubelet[778]: I1101 10:51:08.214059     778 scope.go:117] "RemoveContainer" containerID="e7dddef74ef889f81c7dd211ffc87b748e8035b9cb2c5ab64ce618b3c42c4eaa"
	Nov 01 10:51:09 embed-certs-499088 kubelet[778]: I1101 10:51:09.221567     778 scope.go:117] "RemoveContainer" containerID="5cc6cb74f49db3cf1e43f9cd669afa79c218033e88f845791e3d61da11fab0d7"
	Nov 01 10:51:09 embed-certs-499088 kubelet[778]: E1101 10:51:09.221726     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fr889_kubernetes-dashboard(60a4b187-9c7f-4438-921c-cf3017a7270b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889" podUID="60a4b187-9c7f-4438-921c-cf3017a7270b"
	Nov 01 10:51:09 embed-certs-499088 kubelet[778]: I1101 10:51:09.222717     778 scope.go:117] "RemoveContainer" containerID="e7dddef74ef889f81c7dd211ffc87b748e8035b9cb2c5ab64ce618b3c42c4eaa"
	Nov 01 10:51:10 embed-certs-499088 kubelet[778]: I1101 10:51:10.225693     778 scope.go:117] "RemoveContainer" containerID="5cc6cb74f49db3cf1e43f9cd669afa79c218033e88f845791e3d61da11fab0d7"
	Nov 01 10:51:10 embed-certs-499088 kubelet[778]: E1101 10:51:10.225846     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fr889_kubernetes-dashboard(60a4b187-9c7f-4438-921c-cf3017a7270b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889" podUID="60a4b187-9c7f-4438-921c-cf3017a7270b"
	Nov 01 10:51:19 embed-certs-499088 kubelet[778]: I1101 10:51:19.105705     778 scope.go:117] "RemoveContainer" containerID="5cc6cb74f49db3cf1e43f9cd669afa79c218033e88f845791e3d61da11fab0d7"
	Nov 01 10:51:19 embed-certs-499088 kubelet[778]: I1101 10:51:19.256110     778 scope.go:117] "RemoveContainer" containerID="5cc6cb74f49db3cf1e43f9cd669afa79c218033e88f845791e3d61da11fab0d7"
	Nov 01 10:51:19 embed-certs-499088 kubelet[778]: I1101 10:51:19.256187     778 scope.go:117] "RemoveContainer" containerID="5bedbcf6adb0068c6e314a6cf2ff873b7938d2ba6ec9da57a634909cee70e1fc"
	Nov 01 10:51:19 embed-certs-499088 kubelet[778]: E1101 10:51:19.256472     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fr889_kubernetes-dashboard(60a4b187-9c7f-4438-921c-cf3017a7270b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889" podUID="60a4b187-9c7f-4438-921c-cf3017a7270b"
	Nov 01 10:51:23 embed-certs-499088 kubelet[778]: I1101 10:51:23.269805     778 scope.go:117] "RemoveContainer" containerID="e1d26269c43dedd8c98302d9e3982d65d35c7c8b81d14098592ae01842e55e1d"
	Nov 01 10:51:29 embed-certs-499088 kubelet[778]: I1101 10:51:29.106297     778 scope.go:117] "RemoveContainer" containerID="5bedbcf6adb0068c6e314a6cf2ff873b7938d2ba6ec9da57a634909cee70e1fc"
	Nov 01 10:51:29 embed-certs-499088 kubelet[778]: E1101 10:51:29.106493     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fr889_kubernetes-dashboard(60a4b187-9c7f-4438-921c-cf3017a7270b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fr889" podUID="60a4b187-9c7f-4438-921c-cf3017a7270b"
	Nov 01 10:51:39 embed-certs-499088 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:51:39 embed-certs-499088 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:51:39 embed-certs-499088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1052577ace73dab7a4b657cf5e7a7050b89edcf1f440e4859057a775cd3e4d49] <==
	2025/11/01 10:51:01 Using namespace: kubernetes-dashboard
	2025/11/01 10:51:01 Using in-cluster config to connect to apiserver
	2025/11/01 10:51:01 Using secret token for csrf signing
	2025/11/01 10:51:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:51:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:51:01 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:51:01 Generating JWE encryption key
	2025/11/01 10:51:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:51:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:51:01 Initializing JWE encryption key from synchronized object
	2025/11/01 10:51:01 Creating in-cluster Sidecar client
	2025/11/01 10:51:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:51:01 Serving insecurely on HTTP port: 9090
	2025/11/01 10:51:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:51:01 Starting overwatch
	
	
	==> storage-provisioner [7440e8684eb54b0b34c9156026be71a3e8ae7887c9adbe4447104c66b78e0d01] <==
	I1101 10:51:23.399175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:51:23.413792       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:51:23.413912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:51:23.417664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:26.873630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:31.135569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:34.739284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:37.793027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:40.825913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:40.848981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:51:40.857448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:51:40.858097       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5491653a-fc59-4529-adde-932caf894aba", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-499088_fa8ea202-270f-4ff8-a1b1-1c37831af23e became leader
	I1101 10:51:40.870128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-499088_fa8ea202-270f-4ff8-a1b1-1c37831af23e!
	W1101 10:51:40.938839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:40.961189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:51:40.973604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-499088_fa8ea202-270f-4ff8-a1b1-1c37831af23e!
	W1101 10:51:42.965387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:42.972228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:44.975934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:44.980655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e1d26269c43dedd8c98302d9e3982d65d35c7c8b81d14098592ae01842e55e1d] <==
	I1101 10:50:52.644505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:51:22.654279       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-499088 -n embed-certs-499088
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-499088 -n embed-certs-499088: exit status 2 (519.12009ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-499088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.238142ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-548708 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-548708 describe deploy/metrics-server -n kube-system: exit status 1 (93.77697ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-548708 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-548708
helpers_test.go:243: (dbg) docker inspect no-preload-548708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e",
	        "Created": "2025-11-01T10:51:12.134501468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492146,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:51:12.214081172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/hostname",
	        "HostsPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/hosts",
	        "LogPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e-json.log",
	        "Name": "/no-preload-548708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-548708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-548708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e",
	                "LowerDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-548708",
	                "Source": "/var/lib/docker/volumes/no-preload-548708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-548708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-548708",
	                "name.minikube.sigs.k8s.io": "no-preload-548708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5fef160481bcee5ce07907604a988b65b4f653db710687e2162c23257bb0b95d",
	            "SandboxKey": "/var/run/docker/netns/5fef160481bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-548708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:68:9c:fb:cc:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "458d9289c1e4678d575d4635bc902fe82bbd4c6f42dd0c954078044d50841590",
	                    "EndpointID": "03dcb7abf5fab13cee8a85aae28767e2b23c8ca6031d79fb5bedf5d1ef971a4e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-548708",
	                        "965e3c07903f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548708 -n no-preload-548708
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-548708 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-548708 logs -n 25: (1.325190037s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-245622                                                                                                                                                                                                                     │ old-k8s-version-245622       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p cert-expiration-308600                                                                                                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-014050 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-014050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ stop    │ -p embed-certs-499088 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:51 UTC │
	│ image   │ default-k8s-diff-port-014050 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p disable-driver-mounts-514829                                                                                                                                                                                                               │ disable-driver-mounts-514829 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ image   │ embed-certs-499088 image list --format=json                                                                                                                                                                                                   │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ pause   │ -p embed-certs-499088 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:51:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:51:50.776199  495968 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:50.776316  495968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:50.776364  495968 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:50.776369  495968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:50.776620  495968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:51:50.777173  495968 out.go:368] Setting JSON to false
	I1101 10:51:50.783273  495968 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9263,"bootTime":1761985048,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:51:50.783351  495968 start.go:143] virtualization:  
	I1101 10:51:50.789389  495968 out.go:179] * [newest-cni-196911] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:51:50.792985  495968 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:51:50.793022  495968 notify.go:221] Checking for updates...
	I1101 10:51:50.799419  495968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:51:50.802748  495968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:51:50.805970  495968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:51:50.809231  495968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:51:50.812342  495968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:51:50.816151  495968 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:50.816273  495968 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:51:50.872550  495968 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:51:50.872692  495968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:51.012423  495968 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:51:50.994243189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:51.012561  495968 docker.go:319] overlay module found
	I1101 10:51:51.015919  495968 out.go:179] * Using the docker driver based on user configuration
	I1101 10:51:51.018930  495968 start.go:309] selected driver: docker
	I1101 10:51:51.018956  495968 start.go:930] validating driver "docker" against <nil>
	I1101 10:51:51.018972  495968 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:51:51.019821  495968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:51.138839  495968 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:51:51.121925337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:51.139049  495968 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 10:51:51.139107  495968 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 10:51:51.141246  495968 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:51:51.144514  495968 out.go:179] * Using Docker driver with root privileges
	I1101 10:51:51.147481  495968 cni.go:84] Creating CNI manager for ""
	I1101 10:51:51.147593  495968 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:51:51.147612  495968 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:51:51.147737  495968 start.go:353] cluster config:
	{Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:51:51.153082  495968 out.go:179] * Starting "newest-cni-196911" primary control-plane node in "newest-cni-196911" cluster
	I1101 10:51:51.156055  495968 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:51:51.159247  495968 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:51:51.162160  495968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:51:51.162210  495968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:51:51.162565  495968 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:51:51.162584  495968 cache.go:59] Caching tarball of preloaded images
	I1101 10:51:51.162692  495968 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:51:51.162707  495968 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:51:51.162857  495968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/config.json ...
	I1101 10:51:51.162889  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/config.json: {Name:mk19e2de0488f12059f0b5c1a3b77ee10ddaa055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:51.201585  495968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:51:51.201617  495968 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:51:51.201634  495968 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:51:51.201662  495968 start.go:360] acquireMachinesLock for newest-cni-196911: {Name:mk5d13c3ab821736ff221679ae614a306353c01c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:51.201797  495968 start.go:364] duration metric: took 108.94µs to acquireMachinesLock for "newest-cni-196911"
	I1101 10:51:51.201843  495968 start.go:93] Provisioning new machine with config: &{Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:51:51.201927  495968 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:51:51.205414  495968 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:51:51.205684  495968 start.go:159] libmachine.API.Create for "newest-cni-196911" (driver="docker")
	I1101 10:51:51.205767  495968 client.go:173] LocalClient.Create starting
	I1101 10:51:51.205892  495968 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 10:51:51.205953  495968 main.go:143] libmachine: Decoding PEM data...
	I1101 10:51:51.205974  495968 main.go:143] libmachine: Parsing certificate...
	I1101 10:51:51.206050  495968 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 10:51:51.206093  495968 main.go:143] libmachine: Decoding PEM data...
	I1101 10:51:51.206109  495968 main.go:143] libmachine: Parsing certificate...
	I1101 10:51:51.206665  495968 cli_runner.go:164] Run: docker network inspect newest-cni-196911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:51:51.235630  495968 cli_runner.go:211] docker network inspect newest-cni-196911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:51:51.235748  495968 network_create.go:284] running [docker network inspect newest-cni-196911] to gather additional debugging logs...
	I1101 10:51:51.235774  495968 cli_runner.go:164] Run: docker network inspect newest-cni-196911
	W1101 10:51:51.267179  495968 cli_runner.go:211] docker network inspect newest-cni-196911 returned with exit code 1
	I1101 10:51:51.267241  495968 network_create.go:287] error running [docker network inspect newest-cni-196911]: docker network inspect newest-cni-196911: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-196911 not found
	I1101 10:51:51.267255  495968 network_create.go:289] output of [docker network inspect newest-cni-196911]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-196911 not found
	
	** /stderr **
	I1101 10:51:51.267403  495968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:51:51.319301  495968 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
	I1101 10:51:51.319827  495968 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-adecbbb769f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:b0:b5:2e:4c:30} reservation:<nil>}
	I1101 10:51:51.320140  495968 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2077d26d1806 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:49:68:b6:9e:fb} reservation:<nil>}
	I1101 10:51:51.320676  495968 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d91a0}
	I1101 10:51:51.320697  495968 network_create.go:124] attempt to create docker network newest-cni-196911 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:51:51.320766  495968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-196911 newest-cni-196911
	I1101 10:51:51.448575  495968 network_create.go:108] docker network newest-cni-196911 192.168.76.0/24 created
	I1101 10:51:51.448614  495968 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-196911" container
	I1101 10:51:51.448719  495968 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:51:51.476625  495968 cli_runner.go:164] Run: docker volume create newest-cni-196911 --label name.minikube.sigs.k8s.io=newest-cni-196911 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:51:51.518979  495968 oci.go:103] Successfully created a docker volume newest-cni-196911
	I1101 10:51:51.519107  495968 cli_runner.go:164] Run: docker run --rm --name newest-cni-196911-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-196911 --entrypoint /usr/bin/test -v newest-cni-196911:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:51:52.299885  495968 oci.go:107] Successfully prepared a docker volume newest-cni-196911
	I1101 10:51:52.299928  495968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:51:52.299962  495968 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:51:52.300033  495968 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-196911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:51:57.519316  491840 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:51:57.519378  491840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:51:57.519501  491840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:51:57.519580  491840 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:51:57.519622  491840 kubeadm.go:319] OS: Linux
	I1101 10:51:57.519716  491840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:51:57.519777  491840 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:51:57.519828  491840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:51:57.519901  491840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:51:57.519966  491840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:51:57.520038  491840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:51:57.520111  491840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:51:57.520191  491840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:51:57.520260  491840 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:51:57.520360  491840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:51:57.520473  491840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:51:57.520569  491840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:51:57.520654  491840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:51:57.538781  491840 out.go:252]   - Generating certificates and keys ...
	I1101 10:51:57.538961  491840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:51:57.539040  491840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:51:57.539126  491840 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:51:57.539196  491840 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:51:57.539273  491840 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:51:57.539339  491840 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:51:57.539404  491840 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:51:57.539611  491840 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-548708] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:51:57.539710  491840 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:51:57.539926  491840 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-548708] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:51:57.540021  491840 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:51:57.540120  491840 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:51:57.540200  491840 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:51:57.540292  491840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:51:57.540412  491840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:51:57.540506  491840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:51:57.540582  491840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:51:57.540661  491840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:51:57.540720  491840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:51:57.540806  491840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:51:57.540889  491840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:51:57.572023  491840 out.go:252]   - Booting up control plane ...
	I1101 10:51:57.572191  491840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:51:57.572275  491840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:51:57.572345  491840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:51:57.572453  491840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:51:57.572551  491840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:51:57.572661  491840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:51:57.572748  491840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:51:57.572790  491840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:51:57.572962  491840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:51:57.573144  491840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:51:57.573223  491840 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002275161s
	I1101 10:51:57.573342  491840 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:51:57.573431  491840 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:51:57.573531  491840 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:51:57.573620  491840 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:51:57.573702  491840 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.597518581s
	I1101 10:51:57.573816  491840 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.724238237s
	I1101 10:51:57.573934  491840 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.002763344s
	I1101 10:51:57.574048  491840 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:51:57.574208  491840 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:51:57.574290  491840 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:51:57.574511  491840 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-548708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:51:57.574577  491840 kubeadm.go:319] [bootstrap-token] Using token: 01jztw.u51d6r2k2lvew2ci
	I1101 10:51:57.635197  491840 out.go:252]   - Configuring RBAC rules ...
	I1101 10:51:57.635335  491840 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:51:57.635451  491840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:51:57.635618  491840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:51:57.635762  491840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:51:57.635902  491840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:51:57.635999  491840 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:51:57.636123  491840 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:51:57.636173  491840 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:51:57.636225  491840 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:51:57.636233  491840 kubeadm.go:319] 
	I1101 10:51:57.636296  491840 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:51:57.636306  491840 kubeadm.go:319] 
	I1101 10:51:57.636387  491840 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:51:57.636396  491840 kubeadm.go:319] 
	I1101 10:51:57.636423  491840 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:51:57.636488  491840 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:51:57.636558  491840 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:51:57.636567  491840 kubeadm.go:319] 
	I1101 10:51:57.636624  491840 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:51:57.636628  491840 kubeadm.go:319] 
	I1101 10:51:57.636678  491840 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:51:57.636682  491840 kubeadm.go:319] 
	I1101 10:51:57.636736  491840 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:51:57.636821  491840 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:51:57.636892  491840 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:51:57.636897  491840 kubeadm.go:319] 
	I1101 10:51:57.637039  491840 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:51:57.637122  491840 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:51:57.637133  491840 kubeadm.go:319] 
	I1101 10:51:57.637221  491840 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 01jztw.u51d6r2k2lvew2ci \
	I1101 10:51:57.637333  491840 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 10:51:57.637359  491840 kubeadm.go:319] 	--control-plane 
	I1101 10:51:57.637367  491840 kubeadm.go:319] 
	I1101 10:51:57.637456  491840 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:51:57.637464  491840 kubeadm.go:319] 
	I1101 10:51:57.637550  491840 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 01jztw.u51d6r2k2lvew2ci \
	I1101 10:51:57.637682  491840 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 10:51:57.637695  491840 cni.go:84] Creating CNI manager for ""
	I1101 10:51:57.637703  491840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:51:57.667830  491840 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:51:57.770457  495968 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-196911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.470362219s)
	I1101 10:51:57.770483  495968 kic.go:203] duration metric: took 5.470518644s to extract preloaded images to volume ...
	W1101 10:51:57.770607  495968 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:51:57.770723  495968 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:51:57.884428  495968 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-196911 --name newest-cni-196911 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-196911 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-196911 --network newest-cni-196911 --ip 192.168.76.2 --volume newest-cni-196911:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:51:58.237091  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Running}}
	I1101 10:51:58.259891  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:51:58.285698  495968 cli_runner.go:164] Run: docker exec newest-cni-196911 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:51:58.356198  495968 oci.go:144] the created container "newest-cni-196911" has a running status.
	I1101 10:51:58.356226  495968 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa...
	I1101 10:51:59.016028  495968 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:51:59.045678  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:51:59.063752  495968 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:51:59.063777  495968 kic_runner.go:114] Args: [docker exec --privileged newest-cni-196911 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:51:59.137075  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:51:59.165974  495968 machine.go:94] provisionDockerMachine start ...
	I1101 10:51:59.166072  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:51:59.202420  495968 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:59.202753  495968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1101 10:51:59.202770  495968 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:51:59.205725  495968 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:51:57.700690  491840 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:51:57.706446  491840 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:51:57.706467  491840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:51:57.726498  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:51:58.276260  491840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:51:58.276388  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:51:58.276455  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-548708 minikube.k8s.io/updated_at=2025_11_01T10_51_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=no-preload-548708 minikube.k8s.io/primary=true
	I1101 10:51:58.819613  491840 ops.go:34] apiserver oom_adj: -16
	I1101 10:51:58.819756  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:51:59.320685  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:51:59.819810  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:00.323929  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:00.820313  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:01.320305  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:01.819829  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:02.320373  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:02.547886  491840 kubeadm.go:1114] duration metric: took 4.271542799s to wait for elevateKubeSystemPrivileges
	I1101 10:52:02.547922  491840 kubeadm.go:403] duration metric: took 25.645173887s to StartCluster
	I1101 10:52:02.547942  491840 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:02.548001  491840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:02.548679  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:02.548905  491840 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:52:02.549036  491840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:52:02.549277  491840 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:02.549318  491840 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:52:02.549382  491840 addons.go:70] Setting storage-provisioner=true in profile "no-preload-548708"
	I1101 10:52:02.549397  491840 addons.go:239] Setting addon storage-provisioner=true in "no-preload-548708"
	I1101 10:52:02.549421  491840 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:02.550010  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:02.550434  491840 addons.go:70] Setting default-storageclass=true in profile "no-preload-548708"
	I1101 10:52:02.550458  491840 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-548708"
	I1101 10:52:02.550830  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:02.554062  491840 out.go:179] * Verifying Kubernetes components...
	I1101 10:52:02.558515  491840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:02.583894  491840 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:52:02.591041  491840 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:02.591069  491840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:52:02.591136  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:02.602314  491840 addons.go:239] Setting addon default-storageclass=true in "no-preload-548708"
	I1101 10:52:02.602359  491840 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:02.602773  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:02.630722  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:02.644724  491840 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:02.644748  491840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:52:02.644814  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:02.678780  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:02.954535  491840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:52:03.072917  491840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:03.099889  491840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:03.113777  491840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:03.570608  491840 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:52:03.572313  491840 node_ready.go:35] waiting up to 6m0s for node "no-preload-548708" to be "Ready" ...
	I1101 10:52:03.957452  491840 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:52:02.389001  495968 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-196911
	
	I1101 10:52:02.389025  495968 ubuntu.go:182] provisioning hostname "newest-cni-196911"
	I1101 10:52:02.389090  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:02.417318  495968 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:02.417675  495968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1101 10:52:02.417696  495968 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-196911 && echo "newest-cni-196911" | sudo tee /etc/hostname
	I1101 10:52:02.652362  495968 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-196911
	
	I1101 10:52:02.652444  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:02.696979  495968 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:02.697287  495968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1101 10:52:02.697308  495968 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-196911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-196911/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-196911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:52:02.894836  495968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
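The hostname step above writes /etc/hostname and then patches /etc/hosts only when the 127.0.1.1 entry is missing or stale. A sketch of composing that shell fragment for an arbitrary hostname; hostsUpdateCmd is a hypothetical helper that simply restates the quoted logic:

package main

import "fmt"

// hostsUpdateCmd rebuilds the idempotent /etc/hosts fragment from the log for a
// given hostname: rewrite the 127.0.1.1 line if present, otherwise append one.
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("newest-cni-196911"))
}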
	I1101 10:52:02.894867  495968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:52:02.894885  495968 ubuntu.go:190] setting up certificates
	I1101 10:52:02.894895  495968 provision.go:84] configureAuth start
	I1101 10:52:02.894972  495968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-196911
	I1101 10:52:02.937145  495968 provision.go:143] copyHostCerts
	I1101 10:52:02.937209  495968 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:52:02.937218  495968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:52:02.937298  495968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:52:02.937395  495968 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:52:02.937400  495968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:52:02.937426  495968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:52:02.937521  495968 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:52:02.937525  495968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:52:02.937549  495968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:52:02.937606  495968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.newest-cni-196911 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-196911]
	I1101 10:52:03.262926  495968 provision.go:177] copyRemoteCerts
	I1101 10:52:03.263043  495968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:52:03.263128  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:03.281438  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:03.411099  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:52:03.435146  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:52:03.463925  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:52:03.492293  495968 provision.go:87] duration metric: took 597.375789ms to configureAuth
	I1101 10:52:03.492368  495968 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:52:03.492610  495968 config.go:182] Loaded profile config "newest-cni-196911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:03.492777  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:03.525209  495968 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:03.525521  495968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1101 10:52:03.525536  495968 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:52:03.893449  495968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:52:03.893473  495968 machine.go:97] duration metric: took 4.727479027s to provisionDockerMachine
	I1101 10:52:03.893483  495968 client.go:176] duration metric: took 12.6876879s to LocalClient.Create
	I1101 10:52:03.893511  495968 start.go:167] duration metric: took 12.687814614s to libmachine.API.Create "newest-cni-196911"
	I1101 10:52:03.893522  495968 start.go:293] postStartSetup for "newest-cni-196911" (driver="docker")
	I1101 10:52:03.893533  495968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:52:03.893604  495968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:52:03.893655  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:03.919500  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:04.033860  495968 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:52:04.037450  495968 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:52:04.037483  495968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:52:04.037496  495968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:52:04.037554  495968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:52:04.037639  495968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:52:04.037765  495968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:52:04.045716  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:04.064587  495968 start.go:296] duration metric: took 171.049641ms for postStartSetup
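The filesync scan above maps everything under .minikube/files/<path> onto the same <path> inside the node, which is how 2942882.pem ends up in /etc/ssl/certs. A sketch of that mapping, under the assumption that the local layout mirrors the target root (assetTargets is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// assetTargets walks a local files root and pairs each file with the node path it
// mirrors, e.g. <root>/etc/ssl/certs/2942882.pem -> /etc/ssl/certs/2942882.pem.
func assetTargets(filesRoot string) (map[string]string, error) {
	targets := map[string]string{}
	err := filepath.Walk(filesRoot, func(p string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, relErr := filepath.Rel(filesRoot, p)
		if relErr != nil {
			return relErr
		}
		targets[p] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return targets, err
}

func main() {
	m, err := assetTargets(os.ExpandEnv("$HOME/.minikube/files"))
	fmt.Println(m, err)
}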
	I1101 10:52:04.065041  495968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-196911
	I1101 10:52:04.088128  495968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/config.json ...
	I1101 10:52:04.088498  495968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:52:04.088543  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:04.105559  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:04.206271  495968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:52:04.211086  495968 start.go:128] duration metric: took 13.009144388s to createHost
	I1101 10:52:04.211111  495968 start.go:83] releasing machines lock for "newest-cni-196911", held for 13.009300222s
	I1101 10:52:04.211181  495968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-196911
	I1101 10:52:04.228598  495968 ssh_runner.go:195] Run: cat /version.json
	I1101 10:52:04.228630  495968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:52:04.228653  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:04.228693  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:04.253488  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:04.262526  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:04.469261  495968 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:04.476285  495968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:52:04.512514  495968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:52:04.517187  495968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:52:04.517263  495968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:52:04.562482  495968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:52:04.562520  495968 start.go:496] detecting cgroup driver to use...
	I1101 10:52:04.562571  495968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:52:04.562649  495968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:52:04.591652  495968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:52:04.614618  495968 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:52:04.614703  495968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:52:04.646460  495968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:52:04.681199  495968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:52:04.833957  495968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:52:05.017048  495968 docker.go:234] disabling docker service ...
	I1101 10:52:05.017173  495968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:52:05.049815  495968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:52:05.079377  495968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:52:05.309674  495968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:52:05.523785  495968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:52:05.547030  495968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:52:05.586845  495968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:52:05.586966  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.599849  495968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:52:05.599996  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.618411  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.631973  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.653570  495968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:52:05.678760  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.690164  495968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.710104  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.738921  495968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:52:05.753444  495968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:52:05.762788  495968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
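The block above rewrites /etc/crio/crio.conf.d/02-crio.conf one key at a time with sed (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before restarting CRI-O. A sketch that composes the same style of in-place edit for a key/value pair; setCrioKey is a hypothetical helper, not part of minikube:

package main

import "fmt"

// setCrioKey returns a sed command of the form used in the log to force a TOML
// key in /etc/crio/crio.conf.d/02-crio.conf to a quoted value.
func setCrioKey(key, value string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*%[1]s = .*$|%[1]s = "%[2]s"|' /etc/crio/crio.conf.d/02-crio.conf`, key, value)
}

func main() {
	fmt.Println(setCrioKey("pause_image", "registry.k8s.io/pause:3.10.1"))
	fmt.Println(setCrioKey("cgroup_manager", "cgroupfs"))
}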
	I1101 10:52:03.962834  491840 addons.go:515] duration metric: took 1.413482327s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:52:04.075093  491840 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-548708" context rescaled to 1 replicas
	W1101 10:52:05.579505  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	I1101 10:52:05.918163  495968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:52:06.124441  495968 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:52:06.124563  495968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:52:06.133621  495968 start.go:564] Will wait 60s for crictl version
	I1101 10:52:06.133739  495968 ssh_runner.go:195] Run: which crictl
	I1101 10:52:06.140673  495968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:52:06.172612  495968 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:52:06.172735  495968 ssh_runner.go:195] Run: crio --version
	I1101 10:52:06.215098  495968 ssh_runner.go:195] Run: crio --version
	I1101 10:52:06.261153  495968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:52:06.264423  495968 cli_runner.go:164] Run: docker network inspect newest-cni-196911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:52:06.285653  495968 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:52:06.289899  495968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:06.305875  495968 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:52:06.308853  495968 kubeadm.go:884] updating cluster {Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:52:06.309003  495968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:52:06.309094  495968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:06.345085  495968 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:06.345110  495968 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:52:06.345166  495968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:06.376173  495968 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:06.376198  495968 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:52:06.376207  495968 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:52:06.376408  495968 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-196911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:52:06.376517  495968 ssh_runner.go:195] Run: crio config
	I1101 10:52:06.466187  495968 cni.go:84] Creating CNI manager for ""
	I1101 10:52:06.466213  495968 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:06.466225  495968 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:52:06.466257  495968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-196911 NodeName:newest-cni-196911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:52:06.466391  495968 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-196911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
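The kubeadm config printed above is a single file of stacked YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---" lines. A minimal sketch of splitting such a file into its documents before inspecting one of them, using only string handling (no YAML library assumed; the path is the one from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// splitDocs breaks a multi-document YAML file on standalone "---" separators,
// matching the layout of the kubeadm.yaml shown in the log.
func splitDocs(data string) []string {
	var docs []string
	for _, d := range strings.Split(data, "\n---\n") {
		if s := strings.TrimSpace(d); s != "" {
			docs = append(docs, s)
		}
	}
	return docs
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	for i, d := range splitDocs(string(raw)) {
		fmt.Printf("document %d: %d bytes\n", i, len(d))
	}
}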
	
	I1101 10:52:06.466467  495968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:52:06.476156  495968 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:52:06.476257  495968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:52:06.484902  495968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:52:06.498196  495968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:52:06.519280  495968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 10:52:06.542847  495968 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:52:06.547177  495968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:06.564426  495968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:06.751042  495968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:06.775380  495968 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911 for IP: 192.168.76.2
	I1101 10:52:06.775402  495968 certs.go:195] generating shared ca certs ...
	I1101 10:52:06.775419  495968 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:06.775628  495968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:52:06.775716  495968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:52:06.775744  495968 certs.go:257] generating profile certs ...
	I1101 10:52:06.775832  495968 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.key
	I1101 10:52:06.775852  495968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.crt with IP's: []
	I1101 10:52:07.134538  495968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.crt ...
	I1101 10:52:07.134569  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.crt: {Name:mkfabc42af4f9288372d5f946b09cb224920816d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:07.134819  495968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.key ...
	I1101 10:52:07.134837  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.key: {Name:mkcec546c03779944f6e824473ded36c36323270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:07.134964  495968 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af
	I1101 10:52:07.134987  495968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt.415499af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:52:07.547359  495968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt.415499af ...
	I1101 10:52:07.547401  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt.415499af: {Name:mkf434af126169d8ca18f549cfc9c7b8a5cd4e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:07.547616  495968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af ...
	I1101 10:52:07.547636  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af: {Name:mk597bebd4b24bb541a874864b7c2181f5bfc86e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:07.547772  495968 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt.415499af -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt
	I1101 10:52:07.547899  495968 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key
	I1101 10:52:07.547993  495968 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key
	I1101 10:52:07.548037  495968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt with IP's: []
	I1101 10:52:08.150909  495968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt ...
	I1101 10:52:08.150980  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt: {Name:mk5205b90e0005522b102a14c674d86f0e990463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:08.151169  495968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key ...
	I1101 10:52:08.151206  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key: {Name:mk3e386bbb001ff416ace78c17a97792f552575a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:08.151439  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:52:08.151508  495968 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:52:08.151534  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:52:08.151597  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:52:08.151652  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:52:08.151709  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:52:08.151796  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:08.152464  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:52:08.170688  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:52:08.189233  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:52:08.207783  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:52:08.225581  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:52:08.242702  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:52:08.260144  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:52:08.277781  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:52:08.296874  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:52:08.316098  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:52:08.335705  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:52:08.355793  495968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:52:08.369871  495968 ssh_runner.go:195] Run: openssl version
	I1101 10:52:08.376256  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:52:08.385794  495968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:08.390075  495968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:08.390229  495968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:08.449780  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:52:08.459273  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:52:08.468852  495968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:52:08.479866  495968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:52:08.479979  495968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:52:08.534284  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:52:08.544740  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:52:08.554393  495968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:52:08.563254  495968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:52:08.563323  495968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:52:08.605524  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
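Each CA installed above is linked into /etc/ssl/certs under the name <subject-hash>.0, where the hash comes from the preceding `openssl x509 -hash -noout` run. A sketch of deriving that link name for a certificate; certHashLink is an illustrative helper, and the example path is the one shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// certHashLink runs the same openssl invocation as the log and returns the
// /etc/ssl/certs/<hash>.0 path that the ln -fs step points at the certificate.
func certHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := certHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}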
	I1101 10:52:08.615048  495968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:52:08.618818  495968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:52:08.618871  495968 kubeadm.go:401] StartCluster: {Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:08.618959  495968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:08.619022  495968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:08.664453  495968 cri.go:89] found id: ""
	I1101 10:52:08.664551  495968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:52:08.675189  495968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:52:08.683591  495968 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:52:08.683656  495968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:52:08.694436  495968 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:52:08.694456  495968 kubeadm.go:158] found existing configuration files:
	
	I1101 10:52:08.694505  495968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:52:08.703756  495968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:52:08.703820  495968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:52:08.712389  495968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:52:08.726171  495968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:52:08.726244  495968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:52:08.736827  495968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:52:08.746467  495968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:52:08.746536  495968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:52:08.754370  495968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:52:08.763269  495968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:52:08.763338  495968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:52:08.771113  495968 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:52:08.818854  495968 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:52:08.819120  495968 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:52:08.849605  495968 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:52:08.849720  495968 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:52:08.849764  495968 kubeadm.go:319] OS: Linux
	I1101 10:52:08.849814  495968 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:52:08.849868  495968 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:52:08.849921  495968 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:52:08.849975  495968 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:52:08.850031  495968 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:52:08.850086  495968 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:52:08.850138  495968 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:52:08.850193  495968 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:52:08.850249  495968 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:52:08.929712  495968 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:52:08.929831  495968 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:52:08.929933  495968 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:52:08.944401  495968 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:52:08.949801  495968 out.go:252]   - Generating certificates and keys ...
	I1101 10:52:08.949902  495968 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:52:08.949979  495968 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:52:09.510678  495968 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1101 10:52:08.075848  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	W1101 10:52:10.077364  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	I1101 10:52:10.903281  495968 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:52:11.227686  495968 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:52:12.054481  495968 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:52:13.747095  495968 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:52:13.747520  495968 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-196911] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:52:13.850513  495968 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:52:13.850939  495968 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-196911] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:52:13.922666  495968 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:52:14.259895  495968 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:52:14.657499  495968 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:52:14.657582  495968 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:52:15.336979  495968 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	W1101 10:52:12.576695  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	W1101 10:52:15.078037  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	I1101 10:52:16.377162  495968 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:52:16.919110  495968 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:52:17.956801  495968 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:52:18.427370  495968 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:52:18.428150  495968 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:52:18.430923  495968 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:52:18.434401  495968 out.go:252]   - Booting up control plane ...
	I1101 10:52:18.434533  495968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:52:18.434624  495968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:52:18.436272  495968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:52:18.458037  495968 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:52:18.458150  495968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:52:18.466539  495968 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:52:18.466846  495968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:52:18.467102  495968 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:52:18.611137  495968 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:52:18.611269  495968 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:52:20.612541  495968 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001696s
	I1101 10:52:20.616103  495968 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:52:20.616209  495968 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 10:52:20.616540  495968 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:52:20.616631  495968 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 10:52:17.577319  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	I1101 10:52:19.577070  491840 node_ready.go:49] node "no-preload-548708" is "Ready"
	I1101 10:52:19.577114  491840 node_ready.go:38] duration metric: took 16.004759435s for node "no-preload-548708" to be "Ready" ...
	I1101 10:52:19.577142  491840 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:52:19.577229  491840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:52:19.611704  491840 api_server.go:72] duration metric: took 17.062737564s to wait for apiserver process to appear ...
	I1101 10:52:19.611739  491840 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:52:19.611778  491840 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:52:19.629454  491840 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:52:19.631018  491840 api_server.go:141] control plane version: v1.34.1
	I1101 10:52:19.631057  491840 api_server.go:131] duration metric: took 19.304442ms to wait for apiserver health ...
	I1101 10:52:19.631066  491840 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:52:19.636550  491840 system_pods.go:59] 8 kube-system pods found
	I1101 10:52:19.636661  491840 system_pods.go:61] "coredns-66bc5c9577-dt2gw" [45a2b863-ccf3-4449-b46c-d5d1ccb4a618] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:52:19.636697  491840 system_pods.go:61] "etcd-no-preload-548708" [9d24db53-83d3-4bc3-98f6-f2b64efdb17e] Running
	I1101 10:52:19.636778  491840 system_pods.go:61] "kindnet-mwwlc" [54530b5e-c2c7-4767-8207-d7ecefdc464e] Running
	I1101 10:52:19.636820  491840 system_pods.go:61] "kube-apiserver-no-preload-548708" [bd3bb490-a0ec-42fe-92dc-fd9d35ae09d6] Running
	I1101 10:52:19.636845  491840 system_pods.go:61] "kube-controller-manager-no-preload-548708" [50018a7e-c81d-4280-85db-47d13f403fa5] Running
	I1101 10:52:19.636877  491840 system_pods.go:61] "kube-proxy-m7vxc" [988bbedc-207d-455d-8e07-24e37391bacc] Running
	I1101 10:52:19.636960  491840 system_pods.go:61] "kube-scheduler-no-preload-548708" [e9354f94-f9ab-4e1b-b640-dd5bffc51024] Running
	I1101 10:52:19.637012  491840 system_pods.go:61] "storage-provisioner" [08052f57-4f29-45d0-9176-0e7cd8817cce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:52:19.637039  491840 system_pods.go:74] duration metric: took 5.966127ms to wait for pod list to return data ...
	I1101 10:52:19.637105  491840 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:52:19.642271  491840 default_sa.go:45] found service account: "default"
	I1101 10:52:19.642360  491840 default_sa.go:55] duration metric: took 5.220479ms for default service account to be created ...
	I1101 10:52:19.642401  491840 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:52:19.650276  491840 system_pods.go:86] 8 kube-system pods found
	I1101 10:52:19.650384  491840 system_pods.go:89] "coredns-66bc5c9577-dt2gw" [45a2b863-ccf3-4449-b46c-d5d1ccb4a618] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:52:19.650408  491840 system_pods.go:89] "etcd-no-preload-548708" [9d24db53-83d3-4bc3-98f6-f2b64efdb17e] Running
	I1101 10:52:19.650451  491840 system_pods.go:89] "kindnet-mwwlc" [54530b5e-c2c7-4767-8207-d7ecefdc464e] Running
	I1101 10:52:19.650477  491840 system_pods.go:89] "kube-apiserver-no-preload-548708" [bd3bb490-a0ec-42fe-92dc-fd9d35ae09d6] Running
	I1101 10:52:19.650521  491840 system_pods.go:89] "kube-controller-manager-no-preload-548708" [50018a7e-c81d-4280-85db-47d13f403fa5] Running
	I1101 10:52:19.650548  491840 system_pods.go:89] "kube-proxy-m7vxc" [988bbedc-207d-455d-8e07-24e37391bacc] Running
	I1101 10:52:19.650568  491840 system_pods.go:89] "kube-scheduler-no-preload-548708" [e9354f94-f9ab-4e1b-b640-dd5bffc51024] Running
	I1101 10:52:19.650602  491840 system_pods.go:89] "storage-provisioner" [08052f57-4f29-45d0-9176-0e7cd8817cce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:52:19.650656  491840 retry.go:31] will retry after 283.836728ms: missing components: kube-dns
	I1101 10:52:19.943183  491840 system_pods.go:86] 8 kube-system pods found
	I1101 10:52:19.943267  491840 system_pods.go:89] "coredns-66bc5c9577-dt2gw" [45a2b863-ccf3-4449-b46c-d5d1ccb4a618] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:52:19.943291  491840 system_pods.go:89] "etcd-no-preload-548708" [9d24db53-83d3-4bc3-98f6-f2b64efdb17e] Running
	I1101 10:52:19.943330  491840 system_pods.go:89] "kindnet-mwwlc" [54530b5e-c2c7-4767-8207-d7ecefdc464e] Running
	I1101 10:52:19.943353  491840 system_pods.go:89] "kube-apiserver-no-preload-548708" [bd3bb490-a0ec-42fe-92dc-fd9d35ae09d6] Running
	I1101 10:52:19.943375  491840 system_pods.go:89] "kube-controller-manager-no-preload-548708" [50018a7e-c81d-4280-85db-47d13f403fa5] Running
	I1101 10:52:19.943410  491840 system_pods.go:89] "kube-proxy-m7vxc" [988bbedc-207d-455d-8e07-24e37391bacc] Running
	I1101 10:52:19.943432  491840 system_pods.go:89] "kube-scheduler-no-preload-548708" [e9354f94-f9ab-4e1b-b640-dd5bffc51024] Running
	I1101 10:52:19.943453  491840 system_pods.go:89] "storage-provisioner" [08052f57-4f29-45d0-9176-0e7cd8817cce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:52:19.943502  491840 retry.go:31] will retry after 352.86041ms: missing components: kube-dns
	I1101 10:52:20.303087  491840 system_pods.go:86] 8 kube-system pods found
	I1101 10:52:20.303169  491840 system_pods.go:89] "coredns-66bc5c9577-dt2gw" [45a2b863-ccf3-4449-b46c-d5d1ccb4a618] Running
	I1101 10:52:20.303192  491840 system_pods.go:89] "etcd-no-preload-548708" [9d24db53-83d3-4bc3-98f6-f2b64efdb17e] Running
	I1101 10:52:20.303239  491840 system_pods.go:89] "kindnet-mwwlc" [54530b5e-c2c7-4767-8207-d7ecefdc464e] Running
	I1101 10:52:20.303290  491840 system_pods.go:89] "kube-apiserver-no-preload-548708" [bd3bb490-a0ec-42fe-92dc-fd9d35ae09d6] Running
	I1101 10:52:20.303327  491840 system_pods.go:89] "kube-controller-manager-no-preload-548708" [50018a7e-c81d-4280-85db-47d13f403fa5] Running
	I1101 10:52:20.303352  491840 system_pods.go:89] "kube-proxy-m7vxc" [988bbedc-207d-455d-8e07-24e37391bacc] Running
	I1101 10:52:20.303374  491840 system_pods.go:89] "kube-scheduler-no-preload-548708" [e9354f94-f9ab-4e1b-b640-dd5bffc51024] Running
	I1101 10:52:20.303410  491840 system_pods.go:89] "storage-provisioner" [08052f57-4f29-45d0-9176-0e7cd8817cce] Running
	I1101 10:52:20.303446  491840 system_pods.go:126] duration metric: took 661.00708ms to wait for k8s-apps to be running ...
	I1101 10:52:20.303484  491840 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:52:20.303571  491840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:52:20.320119  491840 system_svc.go:56] duration metric: took 16.62669ms WaitForService to wait for kubelet
	I1101 10:52:20.320196  491840 kubeadm.go:587] duration metric: took 17.77123938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:52:20.320233  491840 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:52:20.332056  491840 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:52:20.332139  491840 node_conditions.go:123] node cpu capacity is 2
	I1101 10:52:20.332175  491840 node_conditions.go:105] duration metric: took 11.895963ms to run NodePressure ...
	I1101 10:52:20.332219  491840 start.go:242] waiting for startup goroutines ...
	I1101 10:52:20.332245  491840 start.go:247] waiting for cluster config update ...
	I1101 10:52:20.332272  491840 start.go:256] writing updated cluster config ...
	I1101 10:52:20.332624  491840 ssh_runner.go:195] Run: rm -f paused
	I1101 10:52:20.337340  491840 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:52:20.341528  491840 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dt2gw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.346802  491840 pod_ready.go:94] pod "coredns-66bc5c9577-dt2gw" is "Ready"
	I1101 10:52:20.346876  491840 pod_ready.go:86] duration metric: took 5.270934ms for pod "coredns-66bc5c9577-dt2gw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.355391  491840 pod_ready.go:83] waiting for pod "etcd-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.361793  491840 pod_ready.go:94] pod "etcd-no-preload-548708" is "Ready"
	I1101 10:52:20.361871  491840 pod_ready.go:86] duration metric: took 6.401414ms for pod "etcd-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.367890  491840 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.374877  491840 pod_ready.go:94] pod "kube-apiserver-no-preload-548708" is "Ready"
	I1101 10:52:20.374954  491840 pod_ready.go:86] duration metric: took 6.999245ms for pod "kube-apiserver-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.377728  491840 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.741274  491840 pod_ready.go:94] pod "kube-controller-manager-no-preload-548708" is "Ready"
	I1101 10:52:20.741302  491840 pod_ready.go:86] duration metric: took 363.510866ms for pod "kube-controller-manager-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.941605  491840 pod_ready.go:83] waiting for pod "kube-proxy-m7vxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:21.341892  491840 pod_ready.go:94] pod "kube-proxy-m7vxc" is "Ready"
	I1101 10:52:21.341922  491840 pod_ready.go:86] duration metric: took 400.290548ms for pod "kube-proxy-m7vxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:21.542355  491840 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:21.941802  491840 pod_ready.go:94] pod "kube-scheduler-no-preload-548708" is "Ready"
	I1101 10:52:21.941831  491840 pod_ready.go:86] duration metric: took 399.447529ms for pod "kube-scheduler-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:21.941852  491840 pod_ready.go:40] duration metric: took 1.604433959s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:52:22.037462  491840 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:52:22.040530  491840 out.go:179] * Done! kubectl is now configured to use "no-preload-548708" cluster and "default" namespace by default
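
The block above walks through minikube's final readiness verification for no-preload-548708: apiserver healthz, kube-system pod status, the default service account, the kubelet service, node pressure conditions, and finally an extra wait for pods matching the component label selectors to report Ready. As an illustrative, hypothetical sketch only (not minikube's own code), the same label-based Ready check could be reproduced with client-go, assuming a kubeconfig that already points at this cluster:

// readycheck.go: a minimal, hypothetical sketch of the label-based readiness wait
// logged above. It lists kube-system pods matching each component selector and
// reports whether the PodReady condition is True. Not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumes ~/.kube/config already points at the cluster under test.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same label selectors minikube waits on in the log above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			ready := false
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%-45s ready=%v\n", pod.Name, ready)
		}
	}
}
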
	I1101 10:52:23.563664  495968 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.947167852s
	I1101 10:52:25.614431  495968 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.998286806s
	I1101 10:52:27.618900  495968 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002681771s
	I1101 10:52:27.641901  495968 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:52:27.661471  495968 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:52:27.682898  495968 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:52:27.683449  495968 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-196911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:52:27.697650  495968 kubeadm.go:319] [bootstrap-token] Using token: y8j90m.7prm1r2gb3vjle91
	I1101 10:52:27.700549  495968 out.go:252]   - Configuring RBAC rules ...
	I1101 10:52:27.700681  495968 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:52:27.707377  495968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:52:27.718608  495968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:52:27.724501  495968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:52:27.729434  495968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:52:27.749403  495968 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:52:28.030614  495968 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:52:28.483735  495968 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:52:29.030804  495968 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:52:29.031819  495968 kubeadm.go:319] 
	I1101 10:52:29.031907  495968 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:52:29.031919  495968 kubeadm.go:319] 
	I1101 10:52:29.032007  495968 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:52:29.032015  495968 kubeadm.go:319] 
	I1101 10:52:29.032043  495968 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:52:29.032117  495968 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:52:29.032170  495968 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:52:29.032175  495968 kubeadm.go:319] 
	I1101 10:52:29.032231  495968 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:52:29.032236  495968 kubeadm.go:319] 
	I1101 10:52:29.032285  495968 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:52:29.032290  495968 kubeadm.go:319] 
	I1101 10:52:29.032351  495968 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:52:29.032430  495968 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:52:29.032501  495968 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:52:29.032506  495968 kubeadm.go:319] 
	I1101 10:52:29.032593  495968 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:52:29.032673  495968 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:52:29.032678  495968 kubeadm.go:319] 
	I1101 10:52:29.032771  495968 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token y8j90m.7prm1r2gb3vjle91 \
	I1101 10:52:29.032880  495968 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 10:52:29.032902  495968 kubeadm.go:319] 	--control-plane 
	I1101 10:52:29.032906  495968 kubeadm.go:319] 
	I1101 10:52:29.033023  495968 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:52:29.033030  495968 kubeadm.go:319] 
	I1101 10:52:29.033115  495968 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y8j90m.7prm1r2gb3vjle91 \
	I1101 10:52:29.033523  495968 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 10:52:29.037725  495968 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:52:29.038011  495968 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:52:29.038159  495968 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:52:29.038196  495968 cni.go:84] Creating CNI manager for ""
	I1101 10:52:29.038206  495968 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:29.043219  495968 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:52:29.046107  495968 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:52:29.050250  495968 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:52:29.050275  495968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:52:29.063986  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:52:29.378507  495968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:52:29.378652  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:29.378728  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-196911 minikube.k8s.io/updated_at=2025_11_01T10_52_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=newest-cni-196911 minikube.k8s.io/primary=true
	I1101 10:52:29.512585  495968 ops.go:34] apiserver oom_adj: -16
	I1101 10:52:29.512697  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:30.016102  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:30.513213  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
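
Earlier in this log, kubeadm's [kubelet-check] and [control-plane-check] phases poll local health endpoints (the kubelet at http://127.0.0.1:10248/healthz, kube-controller-manager at https://127.0.0.1:10257/healthz, kube-scheduler at https://127.0.0.1:10259/livez, and the API server at https://192.168.76.2:8443/livez), each for up to 4m0s. A minimal, hypothetical Go sketch of the plain-HTTP kubelet probe, purely for illustration and not kubeadm's actual implementation:

// healthprobe.go: a hypothetical re-creation of the "[kubelet-check]" probe shown
// above. It polls http://127.0.0.1:10248/healthz (run on the node itself) until a
// 200 OK arrives or the 4-minute window kubeadm mentions elapses.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // kubeadm allows "up to 4m0s"
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for a healthy kubelet")
}
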
	
	
	==> CRI-O <==
	Nov 01 10:52:19 no-preload-548708 crio[836]: time="2025-11-01T10:52:19.908651291Z" level=info msg="Created container 7a50ed6a5f7568f7120ba6f98569d0b79713301511bfde34285b3cdeb4cbfda7: kube-system/coredns-66bc5c9577-dt2gw/coredns" id=dd3015a7-5353-4c83-9725-88795242f261 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:19 no-preload-548708 crio[836]: time="2025-11-01T10:52:19.910051944Z" level=info msg="Starting container: 7a50ed6a5f7568f7120ba6f98569d0b79713301511bfde34285b3cdeb4cbfda7" id=6e4fdffc-9120-479c-a803-071e9e724933 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:52:19 no-preload-548708 crio[836]: time="2025-11-01T10:52:19.923795071Z" level=info msg="Started container" PID=2487 containerID=7a50ed6a5f7568f7120ba6f98569d0b79713301511bfde34285b3cdeb4cbfda7 description=kube-system/coredns-66bc5c9577-dt2gw/coredns id=6e4fdffc-9120-479c-a803-071e9e724933 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3b3f51d0f3b8b0322bfd13b9ea46e9f2719054c018db84d3c85a04476f7745a
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.673515984Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cd2f5ba4-3f74-41df-b20c-93cc3c23d2fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.673588502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.679123758Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2b2c335e36eb1870301954c89d5713c7af4613878a82df9ff6eabbb072b01d6f UID:a013fa5d-50ef-4b04-996a-c6fd9681d728 NetNS:/var/run/netns/5bbf38cd-dd48-4b85-af6c-11ed6c84c0ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400135a750}] Aliases:map[]}"
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.679161477Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.695007179Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2b2c335e36eb1870301954c89d5713c7af4613878a82df9ff6eabbb072b01d6f UID:a013fa5d-50ef-4b04-996a-c6fd9681d728 NetNS:/var/run/netns/5bbf38cd-dd48-4b85-af6c-11ed6c84c0ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400135a750}] Aliases:map[]}"
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.69516901Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.703828274Z" level=info msg="Ran pod sandbox 2b2c335e36eb1870301954c89d5713c7af4613878a82df9ff6eabbb072b01d6f with infra container: default/busybox/POD" id=cd2f5ba4-3f74-41df-b20c-93cc3c23d2fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.705152283Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f3045ad0-ab95-4a67-bfa2-58ac1ac0c6de name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.705283385Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f3045ad0-ab95-4a67-bfa2-58ac1ac0c6de name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.705354968Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f3045ad0-ab95-4a67-bfa2-58ac1ac0c6de name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.709784777Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=beafa85e-5ee6-44e9-826a-8724687697dd name=/runtime.v1.ImageService/PullImage
	Nov 01 10:52:22 no-preload-548708 crio[836]: time="2025-11-01T10:52:22.713033792Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:52:24 no-preload-548708 crio[836]: time="2025-11-01T10:52:24.971901787Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=beafa85e-5ee6-44e9-826a-8724687697dd name=/runtime.v1.ImageService/PullImage
	Nov 01 10:52:24 no-preload-548708 crio[836]: time="2025-11-01T10:52:24.977179342Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7046647d-e0eb-4b30-8cf2-89a5f207716a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:24 no-preload-548708 crio[836]: time="2025-11-01T10:52:24.984072592Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bceb764f-ee79-48b9-a5c6-0659d3aa8717 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:24 no-preload-548708 crio[836]: time="2025-11-01T10:52:24.990101579Z" level=info msg="Creating container: default/busybox/busybox" id=839d327c-d74d-4beb-90cd-6c7dcb58751b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:24 no-preload-548708 crio[836]: time="2025-11-01T10:52:24.990392273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:24 no-preload-548708 crio[836]: time="2025-11-01T10:52:24.999266259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:25 no-preload-548708 crio[836]: time="2025-11-01T10:52:25.00013446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:25 no-preload-548708 crio[836]: time="2025-11-01T10:52:25.021265061Z" level=info msg="Created container bc09e59678fb8f7fcc19ffab8b106e48ba0263349081d2b34ace47b760ad558d: default/busybox/busybox" id=839d327c-d74d-4beb-90cd-6c7dcb58751b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:25 no-preload-548708 crio[836]: time="2025-11-01T10:52:25.022762404Z" level=info msg="Starting container: bc09e59678fb8f7fcc19ffab8b106e48ba0263349081d2b34ace47b760ad558d" id=44f7de36-3dab-475b-80ec-f556fc306f3c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:52:25 no-preload-548708 crio[836]: time="2025-11-01T10:52:25.032527738Z" level=info msg="Started container" PID=2541 containerID=bc09e59678fb8f7fcc19ffab8b106e48ba0263349081d2b34ace47b760ad558d description=default/busybox/busybox id=44f7de36-3dab-475b-80ec-f556fc306f3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b2c335e36eb1870301954c89d5713c7af4613878a82df9ff6eabbb072b01d6f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc09e59678fb8       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago       Running             busybox                   0                   2b2c335e36eb1       busybox                                     default
	7a50ed6a5f756       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   b3b3f51d0f3b8       coredns-66bc5c9577-dt2gw                    kube-system
	17f1764c003a7       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   06c65c7d23f68       storage-provisioner                         kube-system
	7521b9cd76461       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   d7c380b06cc66       kindnet-mwwlc                               kube-system
	f7ab4a1ab750d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   d7ddcba122fe9       kube-proxy-m7vxc                            kube-system
	d54f7a3a2db3f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   e175ea81b7c93       kube-scheduler-no-preload-548708            kube-system
	0a50a1cb7bead       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   3693b9cd80c9e       kube-apiserver-no-preload-548708            kube-system
	3ff1717c0a59c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   b39ab159d449f       kube-controller-manager-no-preload-548708   kube-system
	310cebcd94a96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   2bec4298af84a       etcd-no-preload-548708                      kube-system
	
	
	==> coredns [7a50ed6a5f7568f7120ba6f98569d0b79713301511bfde34285b3cdeb4cbfda7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54911 - 56129 "HINFO IN 8162972380799052201.2983888766304485324. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018345508s
	
	
	==> describe nodes <==
	Name:               no-preload-548708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-548708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=no-preload-548708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_51_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:51:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-548708
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:52:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:52:27 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:52:27 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:52:27 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:52:27 +0000   Sat, 01 Nov 2025 10:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-548708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0c3a0660-5fd6-454c-a1ce-cbee363950c2
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-dt2gw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-548708                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-mwwlc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-548708             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-548708    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-m7vxc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-548708             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
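	  (For reference, the percentages above are taken against the node's allocatable capacity: 850m of requested CPU on a 2-CPU (2000m) node is 850 / 2000 ≈ 42%, and 220Mi (225280Ki) of requested memory against 8022296Ki allocatable is ≈ 2.8%, which kubectl displays here as 2%.)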
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 46s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-548708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-548708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-548708 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-548708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-548708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-548708 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-548708 event: Registered Node no-preload-548708 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-548708 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:51] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [310cebcd94a967ae6938fffb5938cc3435135ed09cbf534d74a32b0c492d4286] <==
	{"level":"warn","ts":"2025-11-01T10:51:51.723072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.764979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.781619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.819286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.844816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.869675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.882290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.922197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.943172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.960895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:51.978471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.008536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.052262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.070775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.102597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.131831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.160443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.182986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.199086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.244907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.281191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.300156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.325685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.359009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:52.523874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47038","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:52:32 up  2:35,  0 user,  load average: 5.25, 4.02, 3.10
	Linux no-preload-548708 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7521b9cd764617824aad39dadc765cdfd60cb81535943192d26d694ff3f244d3] <==
	I1101 10:52:08.826839       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:52:08.827083       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:52:08.827206       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:52:08.827225       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:52:08.827239       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:52:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:52:09.028035       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:52:09.028113       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:52:09.028150       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:52:09.029395       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:52:09.228501       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:52:09.228639       1 metrics.go:72] Registering metrics
	I1101 10:52:09.228731       1 controller.go:711] "Syncing nftables rules"
	I1101 10:52:19.033001       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:52:19.033109       1 main.go:301] handling current node
	I1101 10:52:29.028355       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:52:29.028426       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a50a1cb7bead61bfb7e51d69a6546b6c6aacf65471189ca372cff9447300b4d] <==
	I1101 10:51:53.757651       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1101 10:51:53.757979       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:51:53.817332       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:51:53.818630       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:51:53.838138       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:51:53.853822       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:51:53.854104       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:51:54.316363       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:51:54.326090       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:51:54.326117       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:51:55.610072       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:51:55.707822       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:51:55.847200       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:51:55.856542       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:51:55.857981       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:51:55.864753       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:51:56.376209       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:51:56.946242       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:51:57.068720       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:51:57.115781       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:52:02.181250       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:52:02.282262       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:52:02.305269       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:52:02.341743       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1101 10:52:30.483290       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:60998: use of closed network connection
	
	
	==> kube-controller-manager [3ff1717c0a59c50397dab4d5f8889af136c276f18df46752cfcd7f9de12fa6aa] <==
	I1101 10:52:01.405395       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:52:01.405512       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:52:01.405668       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:52:01.405842       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:52:01.406074       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:52:01.406174       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:52:01.385274       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:52:01.407717       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:52:01.417460       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:52:01.417626       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:52:01.419536       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:52:01.422710       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-548708" podCIDRs=["10.244.0.0/24"]
	I1101 10:52:01.422849       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:52:01.422863       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:52:01.422869       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:52:01.426475       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:52:01.426759       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:52:01.439418       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:52:01.439508       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:52:01.439628       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:52:01.439663       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:52:01.441030       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:52:01.441068       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:52:01.448485       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:52:21.662974       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f7ab4a1ab750d645ba3406c404cbc88c186f364e7d1c21b9b7e035ad8cdd1d56] <==
	I1101 10:52:04.697603       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:52:04.791917       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:52:04.896573       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:52:04.896609       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:52:04.896677       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:52:04.935620       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:52:04.935758       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:52:04.943317       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:52:04.943891       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:52:04.943962       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:52:04.949735       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:52:04.952073       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:52:04.950138       1 config.go:200] "Starting service config controller"
	I1101 10:52:04.952196       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:52:04.950449       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:52:04.952254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:52:04.950853       1 config.go:309] "Starting node config controller"
	I1101 10:52:04.952314       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:52:04.952343       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:52:05.053270       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:52:05.053309       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:52:05.053360       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d54f7a3a2db3fbf7f4e4ec3c8ed8d9249b54338c13b6290f8b9eecdac2542ab5] <==
	E1101 10:51:53.794966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:51:53.795001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:51:53.795063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:51:53.801763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:51:53.801825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:51:53.801879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:51:53.801921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:51:53.801970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:51:53.802006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:51:53.802061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:51:53.802112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:51:53.807088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:51:54.628209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:51:54.635413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:51:54.737326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:51:54.778585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:51:54.822530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:51:54.908343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:51:54.941083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:51:54.998516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:51:55.013175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:51:55.041064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:51:55.112708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:51:55.275143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 10:51:58.566048       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:52:02 no-preload-548708 kubelet[1999]: I1101 10:52:02.619145    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/54530b5e-c2c7-4767-8207-d7ecefdc464e-cni-cfg\") pod \"kindnet-mwwlc\" (UID: \"54530b5e-c2c7-4767-8207-d7ecefdc464e\") " pod="kube-system/kindnet-mwwlc"
	Nov 01 10:52:02 no-preload-548708 kubelet[1999]: I1101 10:52:02.619236    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54530b5e-c2c7-4767-8207-d7ecefdc464e-xtables-lock\") pod \"kindnet-mwwlc\" (UID: \"54530b5e-c2c7-4767-8207-d7ecefdc464e\") " pod="kube-system/kindnet-mwwlc"
	Nov 01 10:52:02 no-preload-548708 kubelet[1999]: I1101 10:52:02.619274    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54530b5e-c2c7-4767-8207-d7ecefdc464e-lib-modules\") pod \"kindnet-mwwlc\" (UID: \"54530b5e-c2c7-4767-8207-d7ecefdc464e\") " pod="kube-system/kindnet-mwwlc"
	Nov 01 10:52:02 no-preload-548708 kubelet[1999]: I1101 10:52:02.619299    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nmn\" (UniqueName: \"kubernetes.io/projected/54530b5e-c2c7-4767-8207-d7ecefdc464e-kube-api-access-h5nmn\") pod \"kindnet-mwwlc\" (UID: \"54530b5e-c2c7-4767-8207-d7ecefdc464e\") " pod="kube-system/kindnet-mwwlc"
	Nov 01 10:52:03 no-preload-548708 kubelet[1999]: E1101 10:52:03.741399    1999 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:52:03 no-preload-548708 kubelet[1999]: E1101 10:52:03.741446    1999 projected.go:196] Error preparing data for projected volume kube-api-access-4hqp5 for pod kube-system/kube-proxy-m7vxc: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:52:03 no-preload-548708 kubelet[1999]: E1101 10:52:03.741530    1999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/988bbedc-207d-455d-8e07-24e37391bacc-kube-api-access-4hqp5 podName:988bbedc-207d-455d-8e07-24e37391bacc nodeName:}" failed. No retries permitted until 2025-11-01 10:52:04.241506356 +0000 UTC m=+7.452167709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4hqp5" (UniqueName: "kubernetes.io/projected/988bbedc-207d-455d-8e07-24e37391bacc-kube-api-access-4hqp5") pod "kube-proxy-m7vxc" (UID: "988bbedc-207d-455d-8e07-24e37391bacc") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:52:03 no-preload-548708 kubelet[1999]: E1101 10:52:03.819238    1999 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:52:03 no-preload-548708 kubelet[1999]: E1101 10:52:03.819292    1999 projected.go:196] Error preparing data for projected volume kube-api-access-h5nmn for pod kube-system/kindnet-mwwlc: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:52:03 no-preload-548708 kubelet[1999]: E1101 10:52:03.819369    1999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54530b5e-c2c7-4767-8207-d7ecefdc464e-kube-api-access-h5nmn podName:54530b5e-c2c7-4767-8207-d7ecefdc464e nodeName:}" failed. No retries permitted until 2025-11-01 10:52:04.319348968 +0000 UTC m=+7.530010329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h5nmn" (UniqueName: "kubernetes.io/projected/54530b5e-c2c7-4767-8207-d7ecefdc464e-kube-api-access-h5nmn") pod "kindnet-mwwlc" (UID: "54530b5e-c2c7-4767-8207-d7ecefdc464e") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 10:52:04 no-preload-548708 kubelet[1999]: I1101 10:52:04.247600    1999 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:52:04 no-preload-548708 kubelet[1999]: W1101 10:52:04.615297    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/crio-d7c380b06cc663326955d49e96b2106bbafacb071e7469c838eb0161e6dfd221 WatchSource:0}: Error finding container d7c380b06cc663326955d49e96b2106bbafacb071e7469c838eb0161e6dfd221: Status 404 returned error can't find the container with id d7c380b06cc663326955d49e96b2106bbafacb071e7469c838eb0161e6dfd221
	Nov 01 10:52:05 no-preload-548708 kubelet[1999]: I1101 10:52:05.719046    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m7vxc" podStartSLOduration=3.719024559 podStartE2EDuration="3.719024559s" podCreationTimestamp="2025-11-01 10:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:05.145218688 +0000 UTC m=+8.355880041" watchObservedRunningTime="2025-11-01 10:52:05.719024559 +0000 UTC m=+8.929685912"
	Nov 01 10:52:09 no-preload-548708 kubelet[1999]: I1101 10:52:09.147709    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mwwlc" podStartSLOduration=3.113130283 podStartE2EDuration="7.14769312s" podCreationTimestamp="2025-11-01 10:52:02 +0000 UTC" firstStartedPulling="2025-11-01 10:52:04.620581225 +0000 UTC m=+7.831242586" lastFinishedPulling="2025-11-01 10:52:08.655144062 +0000 UTC m=+11.865805423" observedRunningTime="2025-11-01 10:52:09.146378662 +0000 UTC m=+12.357040040" watchObservedRunningTime="2025-11-01 10:52:09.14769312 +0000 UTC m=+12.358354489"
	Nov 01 10:52:19 no-preload-548708 kubelet[1999]: I1101 10:52:19.304335    1999 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:52:19 no-preload-548708 kubelet[1999]: I1101 10:52:19.386005    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlnff\" (UniqueName: \"kubernetes.io/projected/08052f57-4f29-45d0-9176-0e7cd8817cce-kube-api-access-nlnff\") pod \"storage-provisioner\" (UID: \"08052f57-4f29-45d0-9176-0e7cd8817cce\") " pod="kube-system/storage-provisioner"
	Nov 01 10:52:19 no-preload-548708 kubelet[1999]: I1101 10:52:19.386331    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08052f57-4f29-45d0-9176-0e7cd8817cce-tmp\") pod \"storage-provisioner\" (UID: \"08052f57-4f29-45d0-9176-0e7cd8817cce\") " pod="kube-system/storage-provisioner"
	Nov 01 10:52:19 no-preload-548708 kubelet[1999]: I1101 10:52:19.487303    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77th\" (UniqueName: \"kubernetes.io/projected/45a2b863-ccf3-4449-b46c-d5d1ccb4a618-kube-api-access-d77th\") pod \"coredns-66bc5c9577-dt2gw\" (UID: \"45a2b863-ccf3-4449-b46c-d5d1ccb4a618\") " pod="kube-system/coredns-66bc5c9577-dt2gw"
	Nov 01 10:52:19 no-preload-548708 kubelet[1999]: I1101 10:52:19.487513    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45a2b863-ccf3-4449-b46c-d5d1ccb4a618-config-volume\") pod \"coredns-66bc5c9577-dt2gw\" (UID: \"45a2b863-ccf3-4449-b46c-d5d1ccb4a618\") " pod="kube-system/coredns-66bc5c9577-dt2gw"
	Nov 01 10:52:19 no-preload-548708 kubelet[1999]: W1101 10:52:19.779340    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/crio-b3b3f51d0f3b8b0322bfd13b9ea46e9f2719054c018db84d3c85a04476f7745a WatchSource:0}: Error finding container b3b3f51d0f3b8b0322bfd13b9ea46e9f2719054c018db84d3c85a04476f7745a: Status 404 returned error can't find the container with id b3b3f51d0f3b8b0322bfd13b9ea46e9f2719054c018db84d3c85a04476f7745a
	Nov 01 10:52:20 no-preload-548708 kubelet[1999]: I1101 10:52:20.198690    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.1986713 podStartE2EDuration="17.1986713s" podCreationTimestamp="2025-11-01 10:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:20.195964485 +0000 UTC m=+23.406625846" watchObservedRunningTime="2025-11-01 10:52:20.1986713 +0000 UTC m=+23.409332710"
	Nov 01 10:52:20 no-preload-548708 kubelet[1999]: I1101 10:52:20.254688    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dt2gw" podStartSLOduration=18.254670319 podStartE2EDuration="18.254670319s" podCreationTimestamp="2025-11-01 10:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:20.214617656 +0000 UTC m=+23.425279017" watchObservedRunningTime="2025-11-01 10:52:20.254670319 +0000 UTC m=+23.465331664"
	Nov 01 10:52:22 no-preload-548708 kubelet[1999]: I1101 10:52:22.417575    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z7gd\" (UniqueName: \"kubernetes.io/projected/a013fa5d-50ef-4b04-996a-c6fd9681d728-kube-api-access-5z7gd\") pod \"busybox\" (UID: \"a013fa5d-50ef-4b04-996a-c6fd9681d728\") " pod="default/busybox"
	Nov 01 10:52:22 no-preload-548708 kubelet[1999]: W1101 10:52:22.697176    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/crio-2b2c335e36eb1870301954c89d5713c7af4613878a82df9ff6eabbb072b01d6f WatchSource:0}: Error finding container 2b2c335e36eb1870301954c89d5713c7af4613878a82df9ff6eabbb072b01d6f: Status 404 returned error can't find the container with id 2b2c335e36eb1870301954c89d5713c7af4613878a82df9ff6eabbb072b01d6f
	Nov 01 10:52:30 no-preload-548708 kubelet[1999]: E1101 10:52:30.485101    1999 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:46508->127.0.0.1:36043: read tcp 127.0.0.1:46508->127.0.0.1:36043: read: connection reset by peer
	
	
	==> storage-provisioner [17f1764c003a76892853db0f83bdd66d2b7aef3456b8cd4f4ee6ee480661c1ef] <==
	I1101 10:52:19.781512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:52:19.854106       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:52:19.854151       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:52:19.857187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:19.867136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:52:19.867363       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:52:19.867552       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-548708_698b762f-6058-4f9c-8bbf-b0d05b6581f2!
	I1101 10:52:19.876118       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5d04e54d-f042-48f8-95f5-aa02f6c4b764", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-548708_698b762f-6058-4f9c-8bbf-b0d05b6581f2 became leader
	W1101 10:52:19.877084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:19.884624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:52:19.974752       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-548708_698b762f-6058-4f9c-8bbf-b0d05b6581f2!
	W1101 10:52:21.891710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:21.898552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:23.901077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:23.905264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:25.909006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:25.914593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:27.918115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:27.924797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:29.936685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:29.988031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:31.991840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:52:31.996837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-548708 -n no-preload-548708
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-548708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.89s)
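The storage-provisioner log in the dump above ends with a stream of "v1 Endpoints is deprecated in v1.33+" warnings; they come from the provisioner's leader-election record, which is still kept as a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, the object named in the LeaderElection event). A minimal way to look at that object for this profile, assuming the cluster is still reachable after the test, is:

	kubectl --context no-preload-548708 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# assumption: the leader identity/renew time live in the object's leader-election annotation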

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-196911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-196911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.480585ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-196911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
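The stderr above also shows why the enable is rejected: before touching addons, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and that check fails because /run/runc does not exist on this crio node. A rough manual reproduction for this profile (a sketch, not part of the test) would be:

	out/minikube-linux-arm64 -p newest-cni-196911 ssh -- "sudo runc list -f json"
	# expected to fail as in the stderr above: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p newest-cni-196911 ssh -- "sudo crictl ps -a"
	# crictl asks crio directly and should still list the running containers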
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
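The scheduling warning is the test's generic note for profiles started with --network-plugin=cni; the start log further down shows minikube recommending kindnet for the docker/crio combination and pointing at --cni as the user-friendly alternative. A hedged example of requesting a CNI explicitly at start time, not something this job does:

	out/minikube-linux-arm64 start -p newest-cni-196911 --cni=kindnet --driver=docker --container-runtime=crio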
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-196911
helpers_test.go:243: (dbg) docker inspect newest-cni-196911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8",
	        "Created": "2025-11-01T10:51:57.909472706Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496405,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:51:57.975722299Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/hostname",
	        "HostsPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/hosts",
	        "LogPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8-json.log",
	        "Name": "/newest-cni-196911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-196911:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-196911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8",
	                "LowerDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-196911",
	                "Source": "/var/lib/docker/volumes/newest-cni-196911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-196911",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-196911",
	                "name.minikube.sigs.k8s.io": "newest-cni-196911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "657f13051523a3c085b0f52338ab5ee9de2a6ab09baebc2cfd44218b319ffee9",
	            "SandboxKey": "/var/run/docker/netns/657f13051523",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-196911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:34:3b:a9:51:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e268685f915b61a03d0e4cd44fdcaaee41eecaa2cf061bd3b1cfc552fbc84998",
	                    "EndpointID": "4919a59e6b597d43ce070a8122b4ab253648f1d0a26166fed1a39ce043e0a81e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-196911",
	                        "017ea6857675"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
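When only a couple of fields from the inspect dump matter, a Go-template query is shorter than scanning the full JSON. A sketch using the container name from this report, which per the dump above should print "runc" and the published 8443/tcp host port "33456":

	docker inspect -f '{{.HostConfig.Runtime}}' newest-cni-196911
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-196911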
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-196911 -n newest-cni-196911
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-196911 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-196911 logs -n 25: (1.108999473s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p cert-expiration-308600                                                                                                                                                                                                                     │ cert-expiration-308600       │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-014050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-014050 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-014050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ start   │ -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ stop    │ -p embed-certs-499088 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:51 UTC │
	│ image   │ default-k8s-diff-port-014050 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p disable-driver-mounts-514829                                                                                                                                                                                                               │ disable-driver-mounts-514829 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ image   │ embed-certs-499088 image list --format=json                                                                                                                                                                                                   │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ pause   │ -p embed-certs-499088 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p no-preload-548708 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-196911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:51:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:51:50.776199  495968 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:50.776316  495968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:50.776364  495968 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:50.776369  495968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:50.776620  495968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:51:50.777173  495968 out.go:368] Setting JSON to false
	I1101 10:51:50.783273  495968 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9263,"bootTime":1761985048,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:51:50.783351  495968 start.go:143] virtualization:  
	I1101 10:51:50.789389  495968 out.go:179] * [newest-cni-196911] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:51:50.792985  495968 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:51:50.793022  495968 notify.go:221] Checking for updates...
	I1101 10:51:50.799419  495968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:51:50.802748  495968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:51:50.805970  495968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:51:50.809231  495968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:51:50.812342  495968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:51:50.816151  495968 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:50.816273  495968 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:51:50.872550  495968 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:51:50.872692  495968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:51.012423  495968 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:51:50.994243189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:51.012561  495968 docker.go:319] overlay module found
	I1101 10:51:51.015919  495968 out.go:179] * Using the docker driver based on user configuration
	I1101 10:51:51.018930  495968 start.go:309] selected driver: docker
	I1101 10:51:51.018956  495968 start.go:930] validating driver "docker" against <nil>
	I1101 10:51:51.018972  495968 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:51:51.019821  495968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:51:51.138839  495968 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:51:51.121925337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:51:51.139049  495968 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 10:51:51.139107  495968 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 10:51:51.141246  495968 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:51:51.144514  495968 out.go:179] * Using Docker driver with root privileges
	I1101 10:51:51.147481  495968 cni.go:84] Creating CNI manager for ""
	I1101 10:51:51.147593  495968 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:51:51.147612  495968 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:51:51.147737  495968 start.go:353] cluster config:
	{Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:51:51.153082  495968 out.go:179] * Starting "newest-cni-196911" primary control-plane node in "newest-cni-196911" cluster
	I1101 10:51:51.156055  495968 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:51:51.159247  495968 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:51:51.162160  495968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:51:51.162210  495968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:51:51.162565  495968 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:51:51.162584  495968 cache.go:59] Caching tarball of preloaded images
	I1101 10:51:51.162692  495968 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:51:51.162707  495968 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:51:51.162857  495968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/config.json ...
	I1101 10:51:51.162889  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/config.json: {Name:mk19e2de0488f12059f0b5c1a3b77ee10ddaa055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:51:51.201585  495968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:51:51.201617  495968 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:51:51.201634  495968 cache.go:233] Successfully downloaded all kic artifacts
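At this point the cache check above has found both the kic base image (already present in the local docker daemon) and the preloaded image tarball on disk, so nothing needs to be downloaded. A minimal illustrative check of the on-disk artifact, using the cache path printed in the log:

	# Illustrative only: the preload tarball this run found in its cache.
	ls -lh /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/
	# Should list preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4, per the log.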
	I1101 10:51:51.201662  495968 start.go:360] acquireMachinesLock for newest-cni-196911: {Name:mk5d13c3ab821736ff221679ae614a306353c01c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:51:51.201797  495968 start.go:364] duration metric: took 108.94µs to acquireMachinesLock for "newest-cni-196911"
	I1101 10:51:51.201843  495968 start.go:93] Provisioning new machine with config: &{Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:51:51.201927  495968 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:51:51.205414  495968 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:51:51.205684  495968 start.go:159] libmachine.API.Create for "newest-cni-196911" (driver="docker")
	I1101 10:51:51.205767  495968 client.go:173] LocalClient.Create starting
	I1101 10:51:51.205892  495968 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 10:51:51.205953  495968 main.go:143] libmachine: Decoding PEM data...
	I1101 10:51:51.205974  495968 main.go:143] libmachine: Parsing certificate...
	I1101 10:51:51.206050  495968 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 10:51:51.206093  495968 main.go:143] libmachine: Decoding PEM data...
	I1101 10:51:51.206109  495968 main.go:143] libmachine: Parsing certificate...
	I1101 10:51:51.206665  495968 cli_runner.go:164] Run: docker network inspect newest-cni-196911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:51:51.235630  495968 cli_runner.go:211] docker network inspect newest-cni-196911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:51:51.235748  495968 network_create.go:284] running [docker network inspect newest-cni-196911] to gather additional debugging logs...
	I1101 10:51:51.235774  495968 cli_runner.go:164] Run: docker network inspect newest-cni-196911
	W1101 10:51:51.267179  495968 cli_runner.go:211] docker network inspect newest-cni-196911 returned with exit code 1
	I1101 10:51:51.267241  495968 network_create.go:287] error running [docker network inspect newest-cni-196911]: docker network inspect newest-cni-196911: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-196911 not found
	I1101 10:51:51.267255  495968 network_create.go:289] output of [docker network inspect newest-cni-196911]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-196911 not found
	
	** /stderr **
	I1101 10:51:51.267403  495968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:51:51.319301  495968 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
	I1101 10:51:51.319827  495968 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-adecbbb769f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:b0:b5:2e:4c:30} reservation:<nil>}
	I1101 10:51:51.320140  495968 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2077d26d1806 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:49:68:b6:9e:fb} reservation:<nil>}
	I1101 10:51:51.320676  495968 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d91a0}
	I1101 10:51:51.320697  495968 network_create.go:124] attempt to create docker network newest-cni-196911 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:51:51.320766  495968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-196911 newest-cni-196911
	I1101 10:51:51.448575  495968 network_create.go:108] docker network newest-cni-196911 192.168.76.0/24 created
	I1101 10:51:51.448614  495968 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-196911" container
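The network setup above walks the existing bridge networks, skips the /24 subnets that are already taken (.49, .58, .67) and creates a dedicated bridge network on 192.168.76.0/24, from which the node gets the first usable address. A minimal sketch to confirm the result, using the same Go template fields the log itself queries:

	# Illustrative only: inspect the bridge network minikube just created.
	docker network inspect newest-cni-196911 \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} gateway {{.Gateway}}{{end}}'
	# Expected, per the log: newest-cni-196911: 192.168.76.0/24 gateway 192.168.76.1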
	I1101 10:51:51.448719  495968 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:51:51.476625  495968 cli_runner.go:164] Run: docker volume create newest-cni-196911 --label name.minikube.sigs.k8s.io=newest-cni-196911 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:51:51.518979  495968 oci.go:103] Successfully created a docker volume newest-cni-196911
	I1101 10:51:51.519107  495968 cli_runner.go:164] Run: docker run --rm --name newest-cni-196911-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-196911 --entrypoint /usr/bin/test -v newest-cni-196911:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:51:52.299885  495968 oci.go:107] Successfully prepared a docker volume newest-cni-196911
	I1101 10:51:52.299928  495968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:51:52.299962  495968 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:51:52.300033  495968 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-196911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:51:57.519316  491840 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:51:57.519378  491840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:51:57.519501  491840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:51:57.519580  491840 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:51:57.519622  491840 kubeadm.go:319] OS: Linux
	I1101 10:51:57.519716  491840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:51:57.519777  491840 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:51:57.519828  491840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:51:57.519901  491840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:51:57.519966  491840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:51:57.520038  491840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:51:57.520111  491840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:51:57.520191  491840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:51:57.520260  491840 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:51:57.520360  491840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:51:57.520473  491840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:51:57.520569  491840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:51:57.520654  491840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:51:57.538781  491840 out.go:252]   - Generating certificates and keys ...
	I1101 10:51:57.538961  491840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:51:57.539040  491840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:51:57.539126  491840 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:51:57.539196  491840 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:51:57.539273  491840 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:51:57.539339  491840 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:51:57.539404  491840 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:51:57.539611  491840 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-548708] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:51:57.539710  491840 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:51:57.539926  491840 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-548708] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:51:57.540021  491840 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:51:57.540120  491840 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:51:57.540200  491840 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:51:57.540292  491840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:51:57.540412  491840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:51:57.540506  491840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:51:57.540582  491840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:51:57.540661  491840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:51:57.540720  491840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:51:57.540806  491840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:51:57.540889  491840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:51:57.572023  491840 out.go:252]   - Booting up control plane ...
	I1101 10:51:57.572191  491840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:51:57.572275  491840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:51:57.572345  491840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:51:57.572453  491840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:51:57.572551  491840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:51:57.572661  491840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:51:57.572748  491840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:51:57.572790  491840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:51:57.572962  491840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:51:57.573144  491840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:51:57.573223  491840 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002275161s
	I1101 10:51:57.573342  491840 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:51:57.573431  491840 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:51:57.573531  491840 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:51:57.573620  491840 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:51:57.573702  491840 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.597518581s
	I1101 10:51:57.573816  491840 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.724238237s
	I1101 10:51:57.573934  491840 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.002763344s
	I1101 10:51:57.574048  491840 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:51:57.574208  491840 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:51:57.574290  491840 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:51:57.574511  491840 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-548708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:51:57.574577  491840 kubeadm.go:319] [bootstrap-token] Using token: 01jztw.u51d6r2k2lvew2ci
	I1101 10:51:57.635197  491840 out.go:252]   - Configuring RBAC rules ...
	I1101 10:51:57.635335  491840 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:51:57.635451  491840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:51:57.635618  491840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:51:57.635762  491840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:51:57.635902  491840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:51:57.635999  491840 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:51:57.636123  491840 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:51:57.636173  491840 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:51:57.636225  491840 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:51:57.636233  491840 kubeadm.go:319] 
	I1101 10:51:57.636296  491840 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:51:57.636306  491840 kubeadm.go:319] 
	I1101 10:51:57.636387  491840 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:51:57.636396  491840 kubeadm.go:319] 
	I1101 10:51:57.636423  491840 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:51:57.636488  491840 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:51:57.636558  491840 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:51:57.636567  491840 kubeadm.go:319] 
	I1101 10:51:57.636624  491840 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:51:57.636628  491840 kubeadm.go:319] 
	I1101 10:51:57.636678  491840 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:51:57.636682  491840 kubeadm.go:319] 
	I1101 10:51:57.636736  491840 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:51:57.636821  491840 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:51:57.636892  491840 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:51:57.636897  491840 kubeadm.go:319] 
	I1101 10:51:57.637039  491840 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:51:57.637122  491840 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:51:57.637133  491840 kubeadm.go:319] 
	I1101 10:51:57.637221  491840 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 01jztw.u51d6r2k2lvew2ci \
	I1101 10:51:57.637333  491840 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 10:51:57.637359  491840 kubeadm.go:319] 	--control-plane 
	I1101 10:51:57.637367  491840 kubeadm.go:319] 
	I1101 10:51:57.637456  491840 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:51:57.637464  491840 kubeadm.go:319] 
	I1101 10:51:57.637550  491840 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 01jztw.u51d6r2k2lvew2ci \
	I1101 10:51:57.637682  491840 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
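The join commands above carry a --discovery-token-ca-cert-hash that pins the cluster CA. A joining machine can recompute it from the CA certificate itself; a sketch using the standard kubeadm recipe and the certificateDir this run uses (/var/lib/minikube/certs):

	# Illustrative only: recompute the discovery-token CA cert hash on the
	# control-plane node; it should match the sha256:4d8e... value above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'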
	I1101 10:51:57.637695  491840 cni.go:84] Creating CNI manager for ""
	I1101 10:51:57.637703  491840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:51:57.667830  491840 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:51:57.770457  495968 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-196911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.470362219s)
	I1101 10:51:57.770483  495968 kic.go:203] duration metric: took 5.470518644s to extract preloaded images to volume ...
	W1101 10:51:57.770607  495968 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:51:57.770723  495968 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:51:57.884428  495968 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-196911 --name newest-cni-196911 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-196911 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-196911 --network newest-cni-196911 --ip 192.168.76.2 --volume newest-cni-196911:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:51:58.237091  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Running}}
	I1101 10:51:58.259891  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:51:58.285698  495968 cli_runner.go:164] Run: docker exec newest-cni-196911 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:51:58.356198  495968 oci.go:144] the created container "newest-cni-196911" has a running status.
	I1101 10:51:58.356226  495968 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa...
	I1101 10:51:59.016028  495968 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:51:59.045678  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:51:59.063752  495968 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:51:59.063777  495968 kic_runner.go:114] Args: [docker exec --privileged newest-cni-196911 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:51:59.137075  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:51:59.165974  495968 machine.go:94] provisionDockerMachine start ...
	I1101 10:51:59.166072  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:51:59.202420  495968 main.go:143] libmachine: Using SSH client type: native
	I1101 10:51:59.202753  495968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1101 10:51:59.202770  495968 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:51:59.205725  495968 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
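The provisioner reaches the kic container over an SSH port that docker publishes on 127.0.0.1 (33453 in this run); the EOF above is just the first dial racing sshd inside the container, and the command succeeds on retry further down. A sketch of the equivalent manual connection, with the port lookup, key path and user taken from this log:

	# Illustrative only: connect to the node container the same way the provisioner does.
	port=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-196911)
	ssh -i /home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa \
	  -p "$port" docker@127.0.0.1 hostname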
	I1101 10:51:57.700690  491840 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:51:57.706446  491840 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:51:57.706467  491840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:51:57.726498  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:51:58.276260  491840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:51:58.276388  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:51:58.276455  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-548708 minikube.k8s.io/updated_at=2025_11_01T10_51_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=no-preload-548708 minikube.k8s.io/primary=true
	I1101 10:51:58.819613  491840 ops.go:34] apiserver oom_adj: -16
	I1101 10:51:58.819756  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:51:59.320685  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:51:59.819810  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:00.323929  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:00.820313  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:01.320305  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:01.819829  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:02.320373  491840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:02.547886  491840 kubeadm.go:1114] duration metric: took 4.271542799s to wait for elevateKubeSystemPrivileges
	I1101 10:52:02.547922  491840 kubeadm.go:403] duration metric: took 25.645173887s to StartCluster
	I1101 10:52:02.547942  491840 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:02.548001  491840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:02.548679  491840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:02.548905  491840 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:52:02.549036  491840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:52:02.549277  491840 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:02.549318  491840 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:52:02.549382  491840 addons.go:70] Setting storage-provisioner=true in profile "no-preload-548708"
	I1101 10:52:02.549397  491840 addons.go:239] Setting addon storage-provisioner=true in "no-preload-548708"
	I1101 10:52:02.549421  491840 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:02.550010  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:02.550434  491840 addons.go:70] Setting default-storageclass=true in profile "no-preload-548708"
	I1101 10:52:02.550458  491840 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-548708"
	I1101 10:52:02.550830  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:02.554062  491840 out.go:179] * Verifying Kubernetes components...
	I1101 10:52:02.558515  491840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:02.583894  491840 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:52:02.591041  491840 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:02.591069  491840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:52:02.591136  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:02.602314  491840 addons.go:239] Setting addon default-storageclass=true in "no-preload-548708"
	I1101 10:52:02.602359  491840 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:02.602773  491840 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:02.630722  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:02.644724  491840 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:02.644748  491840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:52:02.644814  491840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:02.678780  491840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:02.954535  491840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:52:03.072917  491840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:03.099889  491840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:03.113777  491840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:03.570608  491840 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:52:03.572313  491840 node_ready.go:35] waiting up to 6m0s for node "no-preload-548708" to be "Ready" ...
	I1101 10:52:03.957452  491840 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
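With the addons applied, the start flow now blocks for up to 6 minutes waiting for the node to report Ready (the retries are visible below). An equivalent manual check, assuming the kubeconfig context that minikube creates for the profile:

	# Illustrative only: watch for the same Ready condition the test is waiting on.
	kubectl --context no-preload-548708 get nodes
	kubectl --context no-preload-548708 wait --for=condition=Ready \
	  node/no-preload-548708 --timeout=6m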
	I1101 10:52:02.389001  495968 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-196911
	
	I1101 10:52:02.389025  495968 ubuntu.go:182] provisioning hostname "newest-cni-196911"
	I1101 10:52:02.389090  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:02.417318  495968 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:02.417675  495968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1101 10:52:02.417696  495968 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-196911 && echo "newest-cni-196911" | sudo tee /etc/hostname
	I1101 10:52:02.652362  495968 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-196911
	
	I1101 10:52:02.652444  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:02.696979  495968 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:02.697287  495968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1101 10:52:02.697308  495968 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-196911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-196911/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-196911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:52:02.894836  495968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:52:02.894867  495968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:52:02.894885  495968 ubuntu.go:190] setting up certificates
	I1101 10:52:02.894895  495968 provision.go:84] configureAuth start
	I1101 10:52:02.894972  495968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-196911
	I1101 10:52:02.937145  495968 provision.go:143] copyHostCerts
	I1101 10:52:02.937209  495968 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:52:02.937218  495968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:52:02.937298  495968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:52:02.937395  495968 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:52:02.937400  495968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:52:02.937426  495968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:52:02.937521  495968 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:52:02.937525  495968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:52:02.937549  495968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:52:02.937606  495968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.newest-cni-196911 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-196911]
	I1101 10:52:03.262926  495968 provision.go:177] copyRemoteCerts
	I1101 10:52:03.263043  495968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:52:03.263128  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:03.281438  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:03.411099  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:52:03.435146  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:52:03.463925  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:52:03.492293  495968 provision.go:87] duration metric: took 597.375789ms to configureAuth
	I1101 10:52:03.492368  495968 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:52:03.492610  495968 config.go:182] Loaded profile config "newest-cni-196911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:03.492777  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:03.525209  495968 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:03.525521  495968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1101 10:52:03.525536  495968 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:52:03.893449  495968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:52:03.893473  495968 machine.go:97] duration metric: took 4.727479027s to provisionDockerMachine
	I1101 10:52:03.893483  495968 client.go:176] duration metric: took 12.6876879s to LocalClient.Create
	I1101 10:52:03.893511  495968 start.go:167] duration metric: took 12.687814614s to libmachine.API.Create "newest-cni-196911"
	I1101 10:52:03.893522  495968 start.go:293] postStartSetup for "newest-cni-196911" (driver="docker")
	I1101 10:52:03.893533  495968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:52:03.893604  495968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:52:03.893655  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:03.919500  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:04.033860  495968 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:52:04.037450  495968 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:52:04.037483  495968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:52:04.037496  495968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:52:04.037554  495968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:52:04.037639  495968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:52:04.037765  495968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:52:04.045716  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:04.064587  495968 start.go:296] duration metric: took 171.049641ms for postStartSetup
	I1101 10:52:04.065041  495968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-196911
	I1101 10:52:04.088128  495968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/config.json ...
	I1101 10:52:04.088498  495968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:52:04.088543  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:04.105559  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:04.206271  495968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:52:04.211086  495968 start.go:128] duration metric: took 13.009144388s to createHost
	I1101 10:52:04.211111  495968 start.go:83] releasing machines lock for "newest-cni-196911", held for 13.009300222s
	I1101 10:52:04.211181  495968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-196911
	I1101 10:52:04.228598  495968 ssh_runner.go:195] Run: cat /version.json
	I1101 10:52:04.228630  495968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:52:04.228653  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:04.228693  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:04.253488  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:04.262526  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:04.469261  495968 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:04.476285  495968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:52:04.512514  495968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:52:04.517187  495968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:52:04.517263  495968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:52:04.562482  495968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:52:04.562520  495968 start.go:496] detecting cgroup driver to use...
	I1101 10:52:04.562571  495968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:52:04.562649  495968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:52:04.591652  495968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:52:04.614618  495968 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:52:04.614703  495968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:52:04.646460  495968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:52:04.681199  495968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:52:04.833957  495968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:52:05.017048  495968 docker.go:234] disabling docker service ...
	I1101 10:52:05.017173  495968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:52:05.049815  495968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:52:05.079377  495968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:52:05.309674  495968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:52:05.523785  495968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:52:05.547030  495968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:52:05.586845  495968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:52:05.586966  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.599849  495968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:52:05.599996  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.618411  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.631973  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.653570  495968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:52:05.678760  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.690164  495968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.710104  495968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:05.738921  495968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:52:05.753444  495968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:52:05.762788  495968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
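The sed sequence above rewrites cri-o's drop-in config: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup and opens unprivileged ports via default_sysctls, then reloads systemd before restarting crio. A quick way to confirm the edits landed, using the same file the commands target:

	# Illustrative only: verify the drop-in those sed commands edited (inside the node).
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",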
	I1101 10:52:03.962834  491840 addons.go:515] duration metric: took 1.413482327s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:52:04.075093  491840 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-548708" context rescaled to 1 replicas
	W1101 10:52:05.579505  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	I1101 10:52:05.918163  495968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:52:06.124441  495968 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:52:06.124563  495968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:52:06.133621  495968 start.go:564] Will wait 60s for crictl version
	I1101 10:52:06.133739  495968 ssh_runner.go:195] Run: which crictl
	I1101 10:52:06.140673  495968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:52:06.172612  495968 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:52:06.172735  495968 ssh_runner.go:195] Run: crio --version
	I1101 10:52:06.215098  495968 ssh_runner.go:195] Run: crio --version
	I1101 10:52:06.261153  495968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:52:06.264423  495968 cli_runner.go:164] Run: docker network inspect newest-cni-196911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:52:06.285653  495968 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:52:06.289899  495968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:06.305875  495968 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:52:06.308853  495968 kubeadm.go:884] updating cluster {Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:52:06.309003  495968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:52:06.309094  495968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:06.345085  495968 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:06.345110  495968 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:52:06.345166  495968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:06.376173  495968 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:06.376198  495968 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:52:06.376207  495968 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:52:06.376408  495968 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-196911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
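The unit text above is a systemd drop-in: the empty `ExecStart=` first clears the command inherited from the base kubelet.service, and the second `ExecStart=` redefines it with the profile-specific flags. After the drop-in is copied to 10-kubeadm.conf and the later `daemon-reload`, the effective unit can be inspected like this (an illustrative check, not a minikube step):

    sudo systemctl daemon-reload
    systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
    systemctl show -p ExecStart kubelet    # the last ExecStart= wins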
	I1101 10:52:06.376517  495968 ssh_runner.go:195] Run: crio config
	I1101 10:52:06.466187  495968 cni.go:84] Creating CNI manager for ""
	I1101 10:52:06.466213  495968 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:06.466225  495968 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:52:06.466257  495968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-196911 NodeName:newest-cni-196911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:52:06.466391  495968 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-196911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
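The generated file above is a multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml a few lines below. One way to exercise such a file without changing anything on the node, once it has been copied there, is a dry run (an illustrative check, not something minikube runs):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run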
	I1101 10:52:06.466467  495968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:52:06.476156  495968 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:52:06.476257  495968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:52:06.484902  495968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:52:06.498196  495968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:52:06.519280  495968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 10:52:06.542847  495968 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:52:06.547177  495968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:06.564426  495968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:06.751042  495968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:06.775380  495968 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911 for IP: 192.168.76.2
	I1101 10:52:06.775402  495968 certs.go:195] generating shared ca certs ...
	I1101 10:52:06.775419  495968 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:06.775628  495968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:52:06.775716  495968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:52:06.775744  495968 certs.go:257] generating profile certs ...
	I1101 10:52:06.775832  495968 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.key
	I1101 10:52:06.775852  495968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.crt with IP's: []
	I1101 10:52:07.134538  495968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.crt ...
	I1101 10:52:07.134569  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.crt: {Name:mkfabc42af4f9288372d5f946b09cb224920816d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:07.134819  495968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.key ...
	I1101 10:52:07.134837  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.key: {Name:mkcec546c03779944f6e824473ded36c36323270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:07.134964  495968 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af
	I1101 10:52:07.134987  495968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt.415499af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:52:07.547359  495968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt.415499af ...
	I1101 10:52:07.547401  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt.415499af: {Name:mkf434af126169d8ca18f549cfc9c7b8a5cd4e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:07.547616  495968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af ...
	I1101 10:52:07.547636  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af: {Name:mk597bebd4b24bb541a874864b7c2181f5bfc86e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:07.547772  495968 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt.415499af -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt
	I1101 10:52:07.547899  495968 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key
	I1101 10:52:07.547993  495968 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key
	I1101 10:52:07.548037  495968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt with IP's: []
	I1101 10:52:08.150909  495968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt ...
	I1101 10:52:08.150980  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt: {Name:mk5205b90e0005522b102a14c674d86f0e990463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:08.151169  495968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key ...
	I1101 10:52:08.151206  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key: {Name:mk3e386bbb001ff416ace78c17a97792f552575a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:08.151439  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:52:08.151508  495968 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:52:08.151534  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:52:08.151597  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:52:08.151652  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:52:08.151709  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:52:08.151796  495968 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:08.152464  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:52:08.170688  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:52:08.189233  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:52:08.207783  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:52:08.225581  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:52:08.242702  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:52:08.260144  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:52:08.277781  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:52:08.296874  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:52:08.316098  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:52:08.335705  495968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:52:08.355793  495968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:52:08.369871  495968 ssh_runner.go:195] Run: openssl version
	I1101 10:52:08.376256  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:52:08.385794  495968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:08.390075  495968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:08.390229  495968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:08.449780  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:52:08.459273  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:52:08.468852  495968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:52:08.479866  495968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:52:08.479979  495968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:52:08.534284  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:52:08.544740  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:52:08.554393  495968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:52:08.563254  495968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:52:08.563323  495968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:52:08.605524  495968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
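The sequence above installs each CA into /usr/share/ca-certificates and links it under /etc/ssl/certs by its OpenSSL subject hash, which is how OpenSSL's -CApath lookup finds it. The hash-to-filename step, reproduced by hand with the same file (the b5213941.0 link above comes from exactly this hash):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # should print "... OK"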
	I1101 10:52:08.615048  495968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:52:08.618818  495968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:52:08.618871  495968 kubeadm.go:401] StartCluster: {Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:08.618959  495968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:08.619022  495968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:08.664453  495968 cri.go:89] found id: ""
	I1101 10:52:08.664551  495968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:52:08.675189  495968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:52:08.683591  495968 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:52:08.683656  495968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:52:08.694436  495968 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:52:08.694456  495968 kubeadm.go:158] found existing configuration files:
	
	I1101 10:52:08.694505  495968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:52:08.703756  495968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:52:08.703820  495968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:52:08.712389  495968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:52:08.726171  495968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:52:08.726244  495968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:52:08.736827  495968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:52:08.746467  495968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:52:08.746536  495968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:52:08.754370  495968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:52:08.763269  495968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:52:08.763338  495968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:52:08.771113  495968 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:52:08.818854  495968 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:52:08.819120  495968 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:52:08.849605  495968 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:52:08.849720  495968 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:52:08.849764  495968 kubeadm.go:319] OS: Linux
	I1101 10:52:08.849814  495968 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:52:08.849868  495968 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:52:08.849921  495968 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:52:08.849975  495968 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:52:08.850031  495968 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:52:08.850086  495968 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:52:08.850138  495968 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:52:08.850193  495968 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:52:08.850249  495968 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:52:08.929712  495968 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:52:08.929831  495968 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:52:08.929933  495968 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:52:08.944401  495968 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:52:08.949801  495968 out.go:252]   - Generating certificates and keys ...
	I1101 10:52:08.949902  495968 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:52:08.949979  495968 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:52:09.510678  495968 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1101 10:52:08.075848  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	W1101 10:52:10.077364  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	I1101 10:52:10.903281  495968 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:52:11.227686  495968 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:52:12.054481  495968 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:52:13.747095  495968 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:52:13.747520  495968 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-196911] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:52:13.850513  495968 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:52:13.850939  495968 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-196911] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:52:13.922666  495968 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:52:14.259895  495968 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:52:14.657499  495968 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:52:14.657582  495968 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:52:15.336979  495968 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	W1101 10:52:12.576695  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	W1101 10:52:15.078037  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	I1101 10:52:16.377162  495968 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:52:16.919110  495968 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:52:17.956801  495968 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:52:18.427370  495968 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:52:18.428150  495968 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:52:18.430923  495968 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:52:18.434401  495968 out.go:252]   - Booting up control plane ...
	I1101 10:52:18.434533  495968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:52:18.434624  495968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:52:18.436272  495968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:52:18.458037  495968 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:52:18.458150  495968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:52:18.466539  495968 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:52:18.466846  495968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:52:18.467102  495968 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:52:18.611137  495968 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:52:18.611269  495968 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:52:20.612541  495968 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001696s
	I1101 10:52:20.616103  495968 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:52:20.616209  495968 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 10:52:20.616540  495968 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:52:20.616631  495968 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
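kubeadm's control-plane-check above polls a fixed set of local endpoints until each reports healthy. The same probes issued by hand (ports and addresses as printed above; -k because the components serve certificates from the cluster's own CA, and assuming the default anonymous access to the health paths):

    curl -s  http://127.0.0.1:10248/healthz    # kubelet            -> ok
    curl -sk https://127.0.0.1:10257/healthz   # controller-manager -> ok
    curl -sk https://127.0.0.1:10259/livez     # scheduler          -> ok
    curl -sk https://192.168.76.2:8443/livez   # apiserver          -> ok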
	W1101 10:52:17.577319  491840 node_ready.go:57] node "no-preload-548708" has "Ready":"False" status (will retry)
	I1101 10:52:19.577070  491840 node_ready.go:49] node "no-preload-548708" is "Ready"
	I1101 10:52:19.577114  491840 node_ready.go:38] duration metric: took 16.004759435s for node "no-preload-548708" to be "Ready" ...
	I1101 10:52:19.577142  491840 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:52:19.577229  491840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:52:19.611704  491840 api_server.go:72] duration metric: took 17.062737564s to wait for apiserver process to appear ...
	I1101 10:52:19.611739  491840 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:52:19.611778  491840 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:52:19.629454  491840 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:52:19.631018  491840 api_server.go:141] control plane version: v1.34.1
	I1101 10:52:19.631057  491840 api_server.go:131] duration metric: took 19.304442ms to wait for apiserver health ...
	I1101 10:52:19.631066  491840 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:52:19.636550  491840 system_pods.go:59] 8 kube-system pods found
	I1101 10:52:19.636661  491840 system_pods.go:61] "coredns-66bc5c9577-dt2gw" [45a2b863-ccf3-4449-b46c-d5d1ccb4a618] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:52:19.636697  491840 system_pods.go:61] "etcd-no-preload-548708" [9d24db53-83d3-4bc3-98f6-f2b64efdb17e] Running
	I1101 10:52:19.636778  491840 system_pods.go:61] "kindnet-mwwlc" [54530b5e-c2c7-4767-8207-d7ecefdc464e] Running
	I1101 10:52:19.636820  491840 system_pods.go:61] "kube-apiserver-no-preload-548708" [bd3bb490-a0ec-42fe-92dc-fd9d35ae09d6] Running
	I1101 10:52:19.636845  491840 system_pods.go:61] "kube-controller-manager-no-preload-548708" [50018a7e-c81d-4280-85db-47d13f403fa5] Running
	I1101 10:52:19.636877  491840 system_pods.go:61] "kube-proxy-m7vxc" [988bbedc-207d-455d-8e07-24e37391bacc] Running
	I1101 10:52:19.636960  491840 system_pods.go:61] "kube-scheduler-no-preload-548708" [e9354f94-f9ab-4e1b-b640-dd5bffc51024] Running
	I1101 10:52:19.637012  491840 system_pods.go:61] "storage-provisioner" [08052f57-4f29-45d0-9176-0e7cd8817cce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:52:19.637039  491840 system_pods.go:74] duration metric: took 5.966127ms to wait for pod list to return data ...
	I1101 10:52:19.637105  491840 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:52:19.642271  491840 default_sa.go:45] found service account: "default"
	I1101 10:52:19.642360  491840 default_sa.go:55] duration metric: took 5.220479ms for default service account to be created ...
	I1101 10:52:19.642401  491840 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:52:19.650276  491840 system_pods.go:86] 8 kube-system pods found
	I1101 10:52:19.650384  491840 system_pods.go:89] "coredns-66bc5c9577-dt2gw" [45a2b863-ccf3-4449-b46c-d5d1ccb4a618] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:52:19.650408  491840 system_pods.go:89] "etcd-no-preload-548708" [9d24db53-83d3-4bc3-98f6-f2b64efdb17e] Running
	I1101 10:52:19.650451  491840 system_pods.go:89] "kindnet-mwwlc" [54530b5e-c2c7-4767-8207-d7ecefdc464e] Running
	I1101 10:52:19.650477  491840 system_pods.go:89] "kube-apiserver-no-preload-548708" [bd3bb490-a0ec-42fe-92dc-fd9d35ae09d6] Running
	I1101 10:52:19.650521  491840 system_pods.go:89] "kube-controller-manager-no-preload-548708" [50018a7e-c81d-4280-85db-47d13f403fa5] Running
	I1101 10:52:19.650548  491840 system_pods.go:89] "kube-proxy-m7vxc" [988bbedc-207d-455d-8e07-24e37391bacc] Running
	I1101 10:52:19.650568  491840 system_pods.go:89] "kube-scheduler-no-preload-548708" [e9354f94-f9ab-4e1b-b640-dd5bffc51024] Running
	I1101 10:52:19.650602  491840 system_pods.go:89] "storage-provisioner" [08052f57-4f29-45d0-9176-0e7cd8817cce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:52:19.650656  491840 retry.go:31] will retry after 283.836728ms: missing components: kube-dns
	I1101 10:52:19.943183  491840 system_pods.go:86] 8 kube-system pods found
	I1101 10:52:19.943267  491840 system_pods.go:89] "coredns-66bc5c9577-dt2gw" [45a2b863-ccf3-4449-b46c-d5d1ccb4a618] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:52:19.943291  491840 system_pods.go:89] "etcd-no-preload-548708" [9d24db53-83d3-4bc3-98f6-f2b64efdb17e] Running
	I1101 10:52:19.943330  491840 system_pods.go:89] "kindnet-mwwlc" [54530b5e-c2c7-4767-8207-d7ecefdc464e] Running
	I1101 10:52:19.943353  491840 system_pods.go:89] "kube-apiserver-no-preload-548708" [bd3bb490-a0ec-42fe-92dc-fd9d35ae09d6] Running
	I1101 10:52:19.943375  491840 system_pods.go:89] "kube-controller-manager-no-preload-548708" [50018a7e-c81d-4280-85db-47d13f403fa5] Running
	I1101 10:52:19.943410  491840 system_pods.go:89] "kube-proxy-m7vxc" [988bbedc-207d-455d-8e07-24e37391bacc] Running
	I1101 10:52:19.943432  491840 system_pods.go:89] "kube-scheduler-no-preload-548708" [e9354f94-f9ab-4e1b-b640-dd5bffc51024] Running
	I1101 10:52:19.943453  491840 system_pods.go:89] "storage-provisioner" [08052f57-4f29-45d0-9176-0e7cd8817cce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:52:19.943502  491840 retry.go:31] will retry after 352.86041ms: missing components: kube-dns
	I1101 10:52:20.303087  491840 system_pods.go:86] 8 kube-system pods found
	I1101 10:52:20.303169  491840 system_pods.go:89] "coredns-66bc5c9577-dt2gw" [45a2b863-ccf3-4449-b46c-d5d1ccb4a618] Running
	I1101 10:52:20.303192  491840 system_pods.go:89] "etcd-no-preload-548708" [9d24db53-83d3-4bc3-98f6-f2b64efdb17e] Running
	I1101 10:52:20.303239  491840 system_pods.go:89] "kindnet-mwwlc" [54530b5e-c2c7-4767-8207-d7ecefdc464e] Running
	I1101 10:52:20.303290  491840 system_pods.go:89] "kube-apiserver-no-preload-548708" [bd3bb490-a0ec-42fe-92dc-fd9d35ae09d6] Running
	I1101 10:52:20.303327  491840 system_pods.go:89] "kube-controller-manager-no-preload-548708" [50018a7e-c81d-4280-85db-47d13f403fa5] Running
	I1101 10:52:20.303352  491840 system_pods.go:89] "kube-proxy-m7vxc" [988bbedc-207d-455d-8e07-24e37391bacc] Running
	I1101 10:52:20.303374  491840 system_pods.go:89] "kube-scheduler-no-preload-548708" [e9354f94-f9ab-4e1b-b640-dd5bffc51024] Running
	I1101 10:52:20.303410  491840 system_pods.go:89] "storage-provisioner" [08052f57-4f29-45d0-9176-0e7cd8817cce] Running
	I1101 10:52:20.303446  491840 system_pods.go:126] duration metric: took 661.00708ms to wait for k8s-apps to be running ...
	I1101 10:52:20.303484  491840 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:52:20.303571  491840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:52:20.320119  491840 system_svc.go:56] duration metric: took 16.62669ms WaitForService to wait for kubelet
	I1101 10:52:20.320196  491840 kubeadm.go:587] duration metric: took 17.77123938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:52:20.320233  491840 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:52:20.332056  491840 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:52:20.332139  491840 node_conditions.go:123] node cpu capacity is 2
	I1101 10:52:20.332175  491840 node_conditions.go:105] duration metric: took 11.895963ms to run NodePressure ...
	I1101 10:52:20.332219  491840 start.go:242] waiting for startup goroutines ...
	I1101 10:52:20.332245  491840 start.go:247] waiting for cluster config update ...
	I1101 10:52:20.332272  491840 start.go:256] writing updated cluster config ...
	I1101 10:52:20.332624  491840 ssh_runner.go:195] Run: rm -f paused
	I1101 10:52:20.337340  491840 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:52:20.341528  491840 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dt2gw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.346802  491840 pod_ready.go:94] pod "coredns-66bc5c9577-dt2gw" is "Ready"
	I1101 10:52:20.346876  491840 pod_ready.go:86] duration metric: took 5.270934ms for pod "coredns-66bc5c9577-dt2gw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.355391  491840 pod_ready.go:83] waiting for pod "etcd-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.361793  491840 pod_ready.go:94] pod "etcd-no-preload-548708" is "Ready"
	I1101 10:52:20.361871  491840 pod_ready.go:86] duration metric: took 6.401414ms for pod "etcd-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.367890  491840 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.374877  491840 pod_ready.go:94] pod "kube-apiserver-no-preload-548708" is "Ready"
	I1101 10:52:20.374954  491840 pod_ready.go:86] duration metric: took 6.999245ms for pod "kube-apiserver-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.377728  491840 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.741274  491840 pod_ready.go:94] pod "kube-controller-manager-no-preload-548708" is "Ready"
	I1101 10:52:20.741302  491840 pod_ready.go:86] duration metric: took 363.510866ms for pod "kube-controller-manager-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:20.941605  491840 pod_ready.go:83] waiting for pod "kube-proxy-m7vxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:21.341892  491840 pod_ready.go:94] pod "kube-proxy-m7vxc" is "Ready"
	I1101 10:52:21.341922  491840 pod_ready.go:86] duration metric: took 400.290548ms for pod "kube-proxy-m7vxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:21.542355  491840 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:21.941802  491840 pod_ready.go:94] pod "kube-scheduler-no-preload-548708" is "Ready"
	I1101 10:52:21.941831  491840 pod_ready.go:86] duration metric: took 399.447529ms for pod "kube-scheduler-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:52:21.941852  491840 pod_ready.go:40] duration metric: took 1.604433959s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:52:22.037462  491840 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:52:22.040530  491840 out.go:179] * Done! kubectl is now configured to use "no-preload-548708" cluster and "default" namespace by default
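The pod_ready loop above waits for kube-dns and each control-plane pod to report Ready before the profile is declared done. A roughly equivalent one-off check with kubectl against the context the log says is now configured (a sketch, using the kube-dns label from the pod list above):

    kubectl --context no-preload-548708 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
    kubectl --context no-preload-548708 -n kube-system get pods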
	I1101 10:52:23.563664  495968 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.947167852s
	I1101 10:52:25.614431  495968 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.998286806s
	I1101 10:52:27.618900  495968 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002681771s
	I1101 10:52:27.641901  495968 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:52:27.661471  495968 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:52:27.682898  495968 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:52:27.683449  495968 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-196911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:52:27.697650  495968 kubeadm.go:319] [bootstrap-token] Using token: y8j90m.7prm1r2gb3vjle91
	I1101 10:52:27.700549  495968 out.go:252]   - Configuring RBAC rules ...
	I1101 10:52:27.700681  495968 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:52:27.707377  495968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:52:27.718608  495968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:52:27.724501  495968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:52:27.729434  495968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:52:27.749403  495968 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:52:28.030614  495968 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:52:28.483735  495968 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:52:29.030804  495968 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:52:29.031819  495968 kubeadm.go:319] 
	I1101 10:52:29.031907  495968 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:52:29.031919  495968 kubeadm.go:319] 
	I1101 10:52:29.032007  495968 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:52:29.032015  495968 kubeadm.go:319] 
	I1101 10:52:29.032043  495968 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:52:29.032117  495968 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:52:29.032170  495968 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:52:29.032175  495968 kubeadm.go:319] 
	I1101 10:52:29.032231  495968 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:52:29.032236  495968 kubeadm.go:319] 
	I1101 10:52:29.032285  495968 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:52:29.032290  495968 kubeadm.go:319] 
	I1101 10:52:29.032351  495968 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:52:29.032430  495968 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:52:29.032501  495968 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:52:29.032506  495968 kubeadm.go:319] 
	I1101 10:52:29.032593  495968 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:52:29.032673  495968 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:52:29.032678  495968 kubeadm.go:319] 
	I1101 10:52:29.032771  495968 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token y8j90m.7prm1r2gb3vjle91 \
	I1101 10:52:29.032880  495968 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 10:52:29.032902  495968 kubeadm.go:319] 	--control-plane 
	I1101 10:52:29.032906  495968 kubeadm.go:319] 
	I1101 10:52:29.033023  495968 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:52:29.033030  495968 kubeadm.go:319] 
	I1101 10:52:29.033115  495968 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y8j90m.7prm1r2gb3vjle91 \
	I1101 10:52:29.033523  495968 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 10:52:29.037725  495968 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:52:29.038011  495968 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:52:29.038159  495968 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:52:29.038196  495968 cni.go:84] Creating CNI manager for ""
	I1101 10:52:29.038206  495968 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:29.043219  495968 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:52:29.046107  495968 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:52:29.050250  495968 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:52:29.050275  495968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:52:29.063986  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:52:29.378507  495968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:52:29.378652  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:29.378728  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-196911 minikube.k8s.io/updated_at=2025_11_01T10_52_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=newest-cni-196911 minikube.k8s.io/primary=true
	I1101 10:52:29.512585  495968 ops.go:34] apiserver oom_adj: -16
	I1101 10:52:29.512697  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:30.016102  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:30.513213  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:31.013580  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:31.513414  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:32.012789  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:32.513786  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:33.013630  495968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:52:33.227426  495968 kubeadm.go:1114] duration metric: took 3.848819625s to wait for elevateKubeSystemPrivileges
	I1101 10:52:33.227456  495968 kubeadm.go:403] duration metric: took 24.608590771s to StartCluster
	I1101 10:52:33.227475  495968 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:33.227550  495968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:33.228480  495968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:33.228702  495968 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:52:33.228793  495968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:52:33.229061  495968 config.go:182] Loaded profile config "newest-cni-196911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:33.229171  495968 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:52:33.229237  495968 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-196911"
	I1101 10:52:33.229252  495968 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-196911"
	I1101 10:52:33.229274  495968 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:33.229786  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:33.233206  495968 out.go:179] * Verifying Kubernetes components...
	I1101 10:52:33.241119  495968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:33.245022  495968 addons.go:70] Setting default-storageclass=true in profile "newest-cni-196911"
	I1101 10:52:33.245063  495968 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-196911"
	I1101 10:52:33.245390  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:33.302463  495968 addons.go:239] Setting addon default-storageclass=true in "newest-cni-196911"
	I1101 10:52:33.305273  495968 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:33.305778  495968 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:33.314295  495968 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:52:33.319287  495968 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:33.319313  495968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:52:33.319385  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:33.377276  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:33.378878  495968 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:33.378898  495968 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:52:33.378965  495968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:33.445088  495968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:33.827307  495968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:33.935558  495968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:52:33.935698  495968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:33.976843  495968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:34.864129  495968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036790991s)
	I1101 10:52:34.865063  495968 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 10:52:34.867257  495968 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:52:34.867323  495968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:52:34.905375  495968 api_server.go:72] duration metric: took 1.676642327s to wait for apiserver process to appear ...
	I1101 10:52:34.905436  495968 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:52:34.905466  495968 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:52:34.939818  495968 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:52:34.941378  495968 api_server.go:141] control plane version: v1.34.1
	I1101 10:52:34.941435  495968 api_server.go:131] duration metric: took 35.977589ms to wait for apiserver health ...
	I1101 10:52:34.941458  495968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:52:34.943941  495968 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:52:34.948391  495968 addons.go:515] duration metric: took 1.719202624s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:52:34.961333  495968 system_pods.go:59] 9 kube-system pods found
	I1101 10:52:34.961446  495968 system_pods.go:61] "coredns-66bc5c9577-7hppd" [17dc08d9-3958-4026-95ef-6312d1a13c8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:52:34.961482  495968 system_pods.go:61] "coredns-66bc5c9577-nrbdx" [40aa9ab2-b153-44dd-8fd8-67a26277b297] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:52:34.961508  495968 system_pods.go:61] "etcd-newest-cni-196911" [42f247b8-6ece-44a9-93cd-beb285466fe5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:52:34.961530  495968 system_pods.go:61] "kindnet-mlxls" [0d6d41c4-8fef-48d4-ab11-4f2c76c278e6] Running
	I1101 10:52:34.961562  495968 system_pods.go:61] "kube-apiserver-newest-cni-196911" [140c210e-a29a-4e71-932d-8133da9b074f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:52:34.961589  495968 system_pods.go:61] "kube-controller-manager-newest-cni-196911" [538b5879-e897-4ab3-950e-1317c7dad7e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:52:34.961612  495968 system_pods.go:61] "kube-proxy-2psfb" [fc92af6a-7726-496b-8f2c-e315e3065bf2] Running
	I1101 10:52:34.961644  495968 system_pods.go:61] "kube-scheduler-newest-cni-196911" [e5db2498-da4d-4ca5-b16a-3f78ee27f34c] Running
	I1101 10:52:34.961672  495968 system_pods.go:61] "storage-provisioner" [3987f872-17e6-466b-b60e-1e931276699e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:52:34.961693  495968 system_pods.go:74] duration metric: took 20.215007ms to wait for pod list to return data ...
	I1101 10:52:34.961713  495968 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:52:34.978194  495968 default_sa.go:45] found service account: "default"
	I1101 10:52:34.978221  495968 default_sa.go:55] duration metric: took 16.486168ms for default service account to be created ...
	I1101 10:52:34.978234  495968 kubeadm.go:587] duration metric: took 1.749505741s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:52:34.978250  495968 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:52:34.991936  495968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:52:34.992017  495968 node_conditions.go:123] node cpu capacity is 2
	I1101 10:52:34.992046  495968 node_conditions.go:105] duration metric: took 13.788879ms to run NodePressure ...
	I1101 10:52:34.992070  495968 start.go:242] waiting for startup goroutines ...
	I1101 10:52:35.368994  495968 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-196911" context rescaled to 1 replicas
	I1101 10:52:35.369032  495968 start.go:247] waiting for cluster config update ...
	I1101 10:52:35.369045  495968 start.go:256] writing updated cluster config ...
	I1101 10:52:35.369341  495968 ssh_runner.go:195] Run: rm -f paused
	I1101 10:52:35.429566  495968 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:52:35.432982  495968 out.go:179] * Done! kubectl is now configured to use "newest-cni-196911" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.342347283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.348973462Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=da4aed46-b611-4ae9-be59-9b94b561ee14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.355542778Z" level=info msg="Ran pod sandbox 04063beea8f30a38d5f44212c07a1eb252e813e0efebaa583cb65efb21294fce with infra container: kube-system/kube-proxy-2psfb/POD" id=da4aed46-b611-4ae9-be59-9b94b561ee14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.35862444Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d431e3e9-df02-46d3-acdf-3d3d03318d27 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.361134986Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=26b8072b-16ac-4b9c-922f-a08ce6dc4fa5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.367099891Z" level=info msg="Creating container: kube-system/kube-proxy-2psfb/kube-proxy" id=6e83f7d1-068e-4435-a5e3-184f13046618 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.367228409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.373622191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.374460985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.402597017Z" level=info msg="Created container 65da9a905f0b90851fefbf53a7e3e01807fa521d6213012770bf897c7673870a: kube-system/kube-proxy-2psfb/kube-proxy" id=6e83f7d1-068e-4435-a5e3-184f13046618 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.404069375Z" level=info msg="Starting container: 65da9a905f0b90851fefbf53a7e3e01807fa521d6213012770bf897c7673870a" id=5e61d4a2-939e-4b87-a0ee-078621df2355 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.406001971Z" level=info msg="Running pod sandbox: kube-system/kindnet-mlxls/POD" id=df472ceb-2a80-4991-9af2-3b2f5745e016 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.406056323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.410659549Z" level=info msg="Started container" PID=1491 containerID=65da9a905f0b90851fefbf53a7e3e01807fa521d6213012770bf897c7673870a description=kube-system/kube-proxy-2psfb/kube-proxy id=5e61d4a2-939e-4b87-a0ee-078621df2355 name=/runtime.v1.RuntimeService/StartContainer sandboxID=04063beea8f30a38d5f44212c07a1eb252e813e0efebaa583cb65efb21294fce
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.411004266Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=df472ceb-2a80-4991-9af2-3b2f5745e016 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.441941702Z" level=info msg="Ran pod sandbox 2183e305a36e522b31896da5ad56fa66b9a9411ede64754840fab9abb97f22eb with infra container: kube-system/kindnet-mlxls/POD" id=df472ceb-2a80-4991-9af2-3b2f5745e016 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.447116273Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=da5ffc36-d85d-4caf-9fe6-ff37edad8069 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.448213284Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7187c62a-767f-4a03-821c-57bad4475dfe name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.457785632Z" level=info msg="Creating container: kube-system/kindnet-mlxls/kindnet-cni" id=941adc5c-beea-4da0-bf7b-1ce065def80f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.457915914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.475489091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.481394212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.514852222Z" level=info msg="Created container e9021bebea24f8777eebdc6594c607fe7ebfa3806868beccd4d8b4497eb371a5: kube-system/kindnet-mlxls/kindnet-cni" id=941adc5c-beea-4da0-bf7b-1ce065def80f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.51577255Z" level=info msg="Starting container: e9021bebea24f8777eebdc6594c607fe7ebfa3806868beccd4d8b4497eb371a5" id=7720e585-5fc5-4271-b69f-9841b4391bb4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:52:34 newest-cni-196911 crio[838]: time="2025-11-01T10:52:34.517469304Z" level=info msg="Started container" PID=1504 containerID=e9021bebea24f8777eebdc6594c607fe7ebfa3806868beccd4d8b4497eb371a5 description=kube-system/kindnet-mlxls/kindnet-cni id=7720e585-5fc5-4271-b69f-9841b4391bb4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2183e305a36e522b31896da5ad56fa66b9a9411ede64754840fab9abb97f22eb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e9021bebea24f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   2183e305a36e5       kindnet-mlxls                               kube-system
	65da9a905f0b9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   04063beea8f30       kube-proxy-2psfb                            kube-system
	0a573db9a883e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   a6bbf487b7d8b       kube-apiserver-newest-cni-196911            kube-system
	fed73f39c2635       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   99da9b21d3dc7       kube-controller-manager-newest-cni-196911   kube-system
	ca69f394b55b6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   b5376f652af81       kube-scheduler-newest-cni-196911            kube-system
	b1961f422bcc6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   7757d23a9f3cf       etcd-newest-cni-196911                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-196911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-196911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=newest-cni-196911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_52_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:52:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-196911
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:52:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:52:28 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:52:28 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:52:28 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:52:28 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-196911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1a13745d-d4b0-4a25-a286-6bb43ff747ac
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-196911                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-mlxls                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-196911             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-196911    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-2psfb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-196911             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 2s    kube-proxy       
	  Normal   Starting                 8s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s    kubelet          Node newest-cni-196911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-196911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s    kubelet          Node newest-cni-196911 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-196911 event: Registered Node newest-cni-196911 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +16.895624] overlayfs: idmapped layers are currently not supported
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:51] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b1961f422bcc69625d567e68dea47c60dcc8142daf8377980cc8cde805d7fd16] <==
	{"level":"warn","ts":"2025-11-01T10:52:24.249960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.267384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.283192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.307176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.331628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.342109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.365908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.388997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.401646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.425960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.441787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.459761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.472255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.490733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.510281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.532472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.550948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.567565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.585160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.598202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.621189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.651391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.670304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.698087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:24.820245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33986","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:52:36 up  2:35,  0 user,  load average: 4.91, 3.97, 3.09
	Linux newest-cni-196911 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9021bebea24f8777eebdc6594c607fe7ebfa3806868beccd4d8b4497eb371a5] <==
	I1101 10:52:34.641526       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:52:34.641778       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:52:34.641901       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:52:34.641913       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:52:34.641926       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:52:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:52:34.834981       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:52:34.835067       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:52:34.835113       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:52:34.835435       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0a573db9a883ec7d426198b15254b2de28c043ebdaecfbf93dc3e5efb30f5b7b] <==
	I1101 10:52:25.742939       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:52:25.743024       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:52:25.750257       1 controller.go:667] quota admission added evaluator for: namespaces
	E1101 10:52:25.753512       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1101 10:52:25.753573       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 10:52:25.760983       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:52:25.761274       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:52:25.956786       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:52:26.449175       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:52:26.456300       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:52:26.456327       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:52:27.242544       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:52:27.295220       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:52:27.364545       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:52:27.375899       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:52:27.377065       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:52:27.382232       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:52:27.576185       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:52:28.454502       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:52:28.482552       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:52:28.496732       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:52:33.503988       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:52:33.681282       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:52:33.781320       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:52:33.818337       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [fed73f39c263552cd6253e142aed7e10b7be91936a5d62be8e348b96c4d110d7] <==
	I1101 10:52:32.709198       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:52:32.709229       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:52:32.709234       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:52:32.709240       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:52:32.717056       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:52:32.725176       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:52:32.725259       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:52:32.725518       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:52:32.725630       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:52:32.725675       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:52:32.725753       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:52:32.729109       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:52:32.729190       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:52:32.729296       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-196911"
	I1101 10:52:32.729359       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:52:32.733019       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:52:32.734494       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:52:32.756494       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:52:32.776788       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:52:32.787159       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:52:32.803145       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-196911" podCIDRs=["10.42.0.0/24"]
	I1101 10:52:32.823136       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:52:32.823167       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:52:32.823174       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:52:32.868196       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [65da9a905f0b90851fefbf53a7e3e01807fa521d6213012770bf897c7673870a] <==
	I1101 10:52:34.503656       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:52:34.622559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:52:34.723116       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:52:34.723152       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:52:34.723227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:52:34.750671       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:52:34.750794       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:52:34.755392       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:52:34.755768       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:52:34.755936       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:52:34.757592       1 config.go:200] "Starting service config controller"
	I1101 10:52:34.757670       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:52:34.757714       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:52:34.757743       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:52:34.757783       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:52:34.757812       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:52:34.758613       1 config.go:309] "Starting node config controller"
	I1101 10:52:34.758673       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:52:34.758703       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:52:34.862683       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:52:34.863223       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:52:34.865370       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ca69f394b55b6ce6cee6ca359719d497416bbea019054fe7af75a97f3819b3e7] <==
	E1101 10:52:25.634117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:52:25.634163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:52:25.634207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:52:25.634253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:52:25.634294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:52:25.634340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:52:25.634386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:52:25.634431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:52:25.634478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:52:25.634522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:52:25.634562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:52:26.479569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:52:26.484730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:52:26.495366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:52:26.519056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:52:26.524707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:52:26.658525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:52:26.727553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:52:26.749067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:52:26.806935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:52:26.821958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:52:26.834446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:52:26.907045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:52:27.127646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 10:52:29.092132       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:52:28 newest-cni-196911 kubelet[1315]: I1101 10:52:28.848701    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85537aadd646d25c0458792c25a3300f-ca-certs\") pod \"kube-controller-manager-newest-cni-196911\" (UID: \"85537aadd646d25c0458792c25a3300f\") " pod="kube-system/kube-controller-manager-newest-cni-196911"
	Nov 01 10:52:28 newest-cni-196911 kubelet[1315]: I1101 10:52:28.848720    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85537aadd646d25c0458792c25a3300f-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-196911\" (UID: \"85537aadd646d25c0458792c25a3300f\") " pod="kube-system/kube-controller-manager-newest-cni-196911"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: I1101 10:52:29.392114    1315 apiserver.go:52] "Watching apiserver"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: I1101 10:52:29.440510    1315 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: I1101 10:52:29.554105    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-196911"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: I1101 10:52:29.554977    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-196911"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: E1101 10:52:29.570062    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-196911\" already exists" pod="kube-system/kube-apiserver-newest-cni-196911"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: E1101 10:52:29.576115    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-196911\" already exists" pod="kube-system/etcd-newest-cni-196911"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: I1101 10:52:29.600197    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-196911" podStartSLOduration=1.600181963 podStartE2EDuration="1.600181963s" podCreationTimestamp="2025-11-01 10:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:29.59995001 +0000 UTC m=+1.294305871" watchObservedRunningTime="2025-11-01 10:52:29.600181963 +0000 UTC m=+1.294537816"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: I1101 10:52:29.616079    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-196911" podStartSLOduration=1.616043436 podStartE2EDuration="1.616043436s" podCreationTimestamp="2025-11-01 10:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:29.615755531 +0000 UTC m=+1.310111392" watchObservedRunningTime="2025-11-01 10:52:29.616043436 +0000 UTC m=+1.310399288"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: I1101 10:52:29.652546    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-196911" podStartSLOduration=1.6525150690000001 podStartE2EDuration="1.652515069s" podCreationTimestamp="2025-11-01 10:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:29.636336646 +0000 UTC m=+1.330692499" watchObservedRunningTime="2025-11-01 10:52:29.652515069 +0000 UTC m=+1.346870930"
	Nov 01 10:52:29 newest-cni-196911 kubelet[1315]: I1101 10:52:29.667605    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-196911" podStartSLOduration=1.667585174 podStartE2EDuration="1.667585174s" podCreationTimestamp="2025-11-01 10:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:29.653944104 +0000 UTC m=+1.348299973" watchObservedRunningTime="2025-11-01 10:52:29.667585174 +0000 UTC m=+1.361941035"
	Nov 01 10:52:32 newest-cni-196911 kubelet[1315]: I1101 10:52:32.827553    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:52:32 newest-cni-196911 kubelet[1315]: I1101 10:52:32.829969    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.131320    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8vwf\" (UniqueName: \"kubernetes.io/projected/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-kube-api-access-j8vwf\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.131370    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn22n\" (UniqueName: \"kubernetes.io/projected/fc92af6a-7726-496b-8f2c-e315e3065bf2-kube-api-access-xn22n\") pod \"kube-proxy-2psfb\" (UID: \"fc92af6a-7726-496b-8f2c-e315e3065bf2\") " pod="kube-system/kube-proxy-2psfb"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.131397    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-lib-modules\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.131418    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc92af6a-7726-496b-8f2c-e315e3065bf2-kube-proxy\") pod \"kube-proxy-2psfb\" (UID: \"fc92af6a-7726-496b-8f2c-e315e3065bf2\") " pod="kube-system/kube-proxy-2psfb"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.131438    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc92af6a-7726-496b-8f2c-e315e3065bf2-lib-modules\") pod \"kube-proxy-2psfb\" (UID: \"fc92af6a-7726-496b-8f2c-e315e3065bf2\") " pod="kube-system/kube-proxy-2psfb"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.131454    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-xtables-lock\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.131473    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-cni-cfg\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.131488    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc92af6a-7726-496b-8f2c-e315e3065bf2-xtables-lock\") pod \"kube-proxy-2psfb\" (UID: \"fc92af6a-7726-496b-8f2c-e315e3065bf2\") " pod="kube-system/kube-proxy-2psfb"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.275888    1315 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.643629    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mlxls" podStartSLOduration=1.643610349 podStartE2EDuration="1.643610349s" podCreationTimestamp="2025-11-01 10:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:34.609660225 +0000 UTC m=+6.304016077" watchObservedRunningTime="2025-11-01 10:52:34.643610349 +0000 UTC m=+6.337966210"
	Nov 01 10:52:34 newest-cni-196911 kubelet[1315]: I1101 10:52:34.643937    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2psfb" podStartSLOduration=1.643929261 podStartE2EDuration="1.643929261s" podCreationTimestamp="2025-11-01 10:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:52:34.643542664 +0000 UTC m=+6.337898542" watchObservedRunningTime="2025-11-01 10:52:34.643929261 +0000 UTC m=+6.338285114"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-196911 -n newest-cni-196911
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-196911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-nrbdx storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner: exit status 1 (81.314569ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-nrbdx" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.35s)
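The NotFound errors above can arise in two ways that are worth keeping apart when reading this post-mortem: the describe at helpers_test.go:285 is run without a namespace flag, so it looks in the default namespace even though coredns and storage-provisioner live in kube-system, and the pods may in any case have been replaced during the restart this test exercises. A hypothetical, NotFound-tolerant variant of that step is sketched below in Go; the helper name, the kube-system assumption, and the use of kubectl's --ignore-not-found flag are illustrative and not part of helpers_test.go.

	// postmortem_sketch.go: hypothetical NotFound-tolerant describe step, not
	// the helpers_test.go implementation. Assumes kubectl is on PATH and that
	// the pods of interest live in kube-system.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func describeIfPresent(kubeContext string, pods []string) {
		for _, pod := range pods {
			// --ignore-not-found makes `kubectl get` print nothing (and exit 0)
			// for a pod that no longer exists, instead of failing the whole step.
			out, err := exec.Command("kubectl", "--context", kubeContext, "-n", "kube-system",
				"get", "pod", pod, "--ignore-not-found", "-o", "name").Output()
			if err != nil || strings.TrimSpace(string(out)) == "" {
				fmt.Printf("skipping %s: not present in kube-system (err=%v)\n", pod, err)
				continue
			}
			desc, _ := exec.Command("kubectl", "--context", kubeContext, "-n", "kube-system",
				"describe", "pod", pod).CombinedOutput()
			fmt.Printf("=== %s ===\n%s\n", pod, desc)
		}
	}

	func main() {
		describeIfPresent("newest-cni-196911", []string{"coredns-66bc5c9577-nrbdx", "storage-provisioner"})
	}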

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (9.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-196911 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-196911 --alsologtostderr -v=1: exit status 80 (3.011288791s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-196911 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:52:59.727565  503521 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:52:59.728149  503521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:59.728179  503521 out.go:374] Setting ErrFile to fd 2...
	I1101 10:52:59.728198  503521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:59.728489  503521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:52:59.728770  503521 out.go:368] Setting JSON to false
	I1101 10:52:59.728817  503521 mustload.go:66] Loading cluster: newest-cni-196911
	I1101 10:52:59.733321  503521 config.go:182] Loaded profile config "newest-cni-196911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:59.733846  503521 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:59.767356  503521 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:59.767721  503521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:52:59.874157  503521 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 10:52:59.861913785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:52:59.874825  503521 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-196911 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:52:59.878593  503521 out.go:179] * Pausing node newest-cni-196911 ... 
	I1101 10:52:59.881605  503521 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:59.881960  503521 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:59.882011  503521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:59.918723  503521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:53:00.058803  503521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:53:00.085052  503521 pause.go:52] kubelet running: true
	I1101 10:53:00.085177  503521 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:53:00.684327  503521 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:53:00.684440  503521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:53:00.857409  503521 cri.go:89] found id: "a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff"
	I1101 10:53:00.857485  503521 cri.go:89] found id: "fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9"
	I1101 10:53:00.857505  503521 cri.go:89] found id: "292d0cbb536acc09cd84b96d1b822feb61d97070a176a9932123014a40ee60cb"
	I1101 10:53:00.857525  503521 cri.go:89] found id: "43cff061f63df9268ac8b9a55804a126d15f4a912d0b682729bc41fab87e54d4"
	I1101 10:53:00.857562  503521 cri.go:89] found id: "4c9e83d09d804cacddc0212f96f7746196a7c47d338ed0e9519993cbb75d1314"
	I1101 10:53:00.857582  503521 cri.go:89] found id: "6ee79706bb2c3b2a369e20eed26ccdb5985aa7c70ae1cd34024086e323278927"
	I1101 10:53:00.857604  503521 cri.go:89] found id: ""
	I1101 10:53:00.857684  503521 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:53:00.893661  503521 retry.go:31] will retry after 275.758227ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:53:00Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:53:01.170174  503521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:53:01.210268  503521 pause.go:52] kubelet running: false
	I1101 10:53:01.210371  503521 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:53:01.515399  503521 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:53:01.515492  503521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:53:01.738425  503521 cri.go:89] found id: "a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff"
	I1101 10:53:01.738499  503521 cri.go:89] found id: "fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9"
	I1101 10:53:01.738518  503521 cri.go:89] found id: "292d0cbb536acc09cd84b96d1b822feb61d97070a176a9932123014a40ee60cb"
	I1101 10:53:01.738538  503521 cri.go:89] found id: "43cff061f63df9268ac8b9a55804a126d15f4a912d0b682729bc41fab87e54d4"
	I1101 10:53:01.738571  503521 cri.go:89] found id: "4c9e83d09d804cacddc0212f96f7746196a7c47d338ed0e9519993cbb75d1314"
	I1101 10:53:01.738596  503521 cri.go:89] found id: "6ee79706bb2c3b2a369e20eed26ccdb5985aa7c70ae1cd34024086e323278927"
	I1101 10:53:01.738615  503521 cri.go:89] found id: ""
	I1101 10:53:01.738694  503521 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:53:01.760413  503521 retry.go:31] will retry after 253.398334ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:53:01Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:53:02.014741  503521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:53:02.043039  503521 pause.go:52] kubelet running: false
	I1101 10:53:02.043139  503521 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:53:02.394447  503521 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:53:02.394540  503521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:53:02.587027  503521 cri.go:89] found id: "a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff"
	I1101 10:53:02.587057  503521 cri.go:89] found id: "fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9"
	I1101 10:53:02.587062  503521 cri.go:89] found id: "292d0cbb536acc09cd84b96d1b822feb61d97070a176a9932123014a40ee60cb"
	I1101 10:53:02.587066  503521 cri.go:89] found id: "43cff061f63df9268ac8b9a55804a126d15f4a912d0b682729bc41fab87e54d4"
	I1101 10:53:02.587070  503521 cri.go:89] found id: "4c9e83d09d804cacddc0212f96f7746196a7c47d338ed0e9519993cbb75d1314"
	I1101 10:53:02.587073  503521 cri.go:89] found id: "6ee79706bb2c3b2a369e20eed26ccdb5985aa7c70ae1cd34024086e323278927"
	I1101 10:53:02.587076  503521 cri.go:89] found id: ""
	I1101 10:53:02.587127  503521 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:53:02.632481  503521 out.go:203] 
	W1101 10:53:02.636202  503521 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:53:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:53:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:53:02.636236  503521 out.go:285] * 
	* 
	W1101 10:53:02.642226  503521 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:53:02.646076  503521 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-196911 --alsologtostderr -v=1 failed: exit status 80
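For orientation, the stderr above shows the sequence the pause path walks before giving up: check whether the kubelet is active and disable it, list CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, then enumerate running containers with `sudo runc list -f json`. That last step is what fails on this crio node, because /run/runc does not exist, and the retry.go backoff only repeats the same error until the command exits with GUEST_PAUSE. The Go sketch below reproduces that probe sequence as a standalone program run directly on the node; it is a hypothetical illustration under those assumptions, not minikube's pause implementation (which drives the same shell commands over SSH).

	// pause_probe.go: minimal sketch of the container-listing probe seen in the
	// pause stderr above, assuming direct shell access to the node. Hypothetical
	// debugging aid; not minikube source code.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runcList mirrors the failing step: `sudo runc list -f json` exits non-zero
	// while /run/runc is missing, so retry a few times with a short backoff,
	// roughly like the retry.go behaviour visible in the log.
	func runcList() ([]byte, error) {
		var lastErr error
		for attempt := 0; attempt < 3; attempt++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return out, nil
			}
			lastErr = fmt.Errorf("runc list: %w (output: %s)", err, out)
			time.Sleep(time.Duration(250*(attempt+1)) * time.Millisecond)
		}
		return nil, lastErr
	}

	func main() {
		// 1. Is the kubelet still active? (pause disables it first.)
		kubeletRunning := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
		fmt.Println("kubelet running:", kubeletRunning)

		// 2. List CRI containers in the namespaces minikube pauses.
		for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace="+ns).CombinedOutput()
			fmt.Printf("crictl ps (%s): err=%v ids=%q\n", ns, err, out)
		}

		// 3. The step that fails on this node: runc cannot open /run/runc.
		if _, err := runcList(); err != nil {
			fmt.Println("pause would abort here:", err)
		}
	}

Run on the node, this sketch should surface the same "open /run/runc: no such file or directory" error at step 3 without invoking the full pause flow.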
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-196911
helpers_test.go:243: (dbg) docker inspect newest-cni-196911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8",
	        "Created": "2025-11-01T10:51:57.909472706Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:52:39.650076766Z",
	            "FinishedAt": "2025-11-01T10:52:38.688875583Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/hostname",
	        "HostsPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/hosts",
	        "LogPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8-json.log",
	        "Name": "/newest-cni-196911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-196911:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-196911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8",
	                "LowerDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-196911",
	                "Source": "/var/lib/docker/volumes/newest-cni-196911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-196911",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-196911",
	                "name.minikube.sigs.k8s.io": "newest-cni-196911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32e5dca3a74dd0b1be2be7d90351d66259d21d75fe22d4edca6510aaaf1c4188",
	            "SandboxKey": "/var/run/docker/netns/32e5dca3a74d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-196911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:14:3a:09:09:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e268685f915b61a03d0e4cd44fdcaaee41eecaa2cf061bd3b1cfc552fbc84998",
	                    "EndpointID": "06c4792348b03b7453cb4cc8241149eca6eae41534e470f189acadf178cb50d6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-196911",
	                        "017ea6857675"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-196911 -n newest-cni-196911
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-196911 -n newest-cni-196911: exit status 2 (512.015756ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-196911 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-196911 logs -n 25: (1.700938387s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ stop    │ -p embed-certs-499088 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:51 UTC │
	│ image   │ default-k8s-diff-port-014050 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p disable-driver-mounts-514829                                                                                                                                                                                                               │ disable-driver-mounts-514829 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ image   │ embed-certs-499088 image list --format=json                                                                                                                                                                                                   │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ pause   │ -p embed-certs-499088 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p no-preload-548708 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p newest-cni-196911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p newest-cni-196911 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable dashboard -p newest-cni-196911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable dashboard -p no-preload-548708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ image   │ newest-cni-196911 image list --format=json                                                                                                                                                                                                    │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ pause   │ -p newest-cni-196911 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:52:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:52:46.080449  501404 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:52:46.080568  501404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:46.080573  501404 out.go:374] Setting ErrFile to fd 2...
	I1101 10:52:46.080578  501404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:46.081885  501404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:52:46.082317  501404 out.go:368] Setting JSON to false
	I1101 10:52:46.083206  501404 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9318,"bootTime":1761985048,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:52:46.083291  501404 start.go:143] virtualization:  
	I1101 10:52:46.086775  501404 out.go:179] * [no-preload-548708] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:52:46.090668  501404 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:52:46.090833  501404 notify.go:221] Checking for updates...
	I1101 10:52:46.097246  501404 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:52:46.100088  501404 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:46.102854  501404 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:52:46.106548  501404 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:52:46.109447  501404 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:52:46.112823  501404 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:46.113447  501404 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:52:46.153891  501404 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:52:46.154028  501404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:52:46.245708  501404 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:52:46.234362529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:52:46.245844  501404 docker.go:319] overlay module found
	I1101 10:52:46.249158  501404 out.go:179] * Using the docker driver based on existing profile
	I1101 10:52:46.252304  501404 start.go:309] selected driver: docker
	I1101 10:52:46.252326  501404 start.go:930] validating driver "docker" against &{Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:46.252468  501404 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:52:46.253626  501404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:52:46.351692  501404 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:52:46.337682636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:52:46.352044  501404 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:52:46.352072  501404 cni.go:84] Creating CNI manager for ""
	I1101 10:52:46.352127  501404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:46.352169  501404 start.go:353] cluster config:
	{Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:46.355586  501404 out.go:179] * Starting "no-preload-548708" primary control-plane node in "no-preload-548708" cluster
	I1101 10:52:46.358537  501404 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:52:46.361517  501404 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:52:46.364444  501404 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:52:46.364627  501404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json ...
	I1101 10:52:46.365014  501404 cache.go:107] acquiring lock: {Name:mk87c12063bfe6477c1b6ed8fc827cc60e9ca811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365097  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:52:46.365105  501404 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.659µs
	I1101 10:52:46.365113  501404 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:52:46.365125  501404 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:52:46.365259  501404 cache.go:107] acquiring lock: {Name:mk4c1242d2913ae89c6c2d48e391247cfb4b6c0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365322  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:52:46.365330  501404 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 76.301µs
	I1101 10:52:46.365338  501404 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:52:46.365349  501404 cache.go:107] acquiring lock: {Name:mk98f5306fba9c79ff24fb30add0aac4b2ea9d11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365391  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:52:46.365398  501404 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 50.2µs
	I1101 10:52:46.365404  501404 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:52:46.365415  501404 cache.go:107] acquiring lock: {Name:mkd94f63e239c14a2fc215ef4549c0b3008ae371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365442  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:52:46.365447  501404 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 33.51µs
	I1101 10:52:46.365454  501404 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:52:46.365469  501404 cache.go:107] acquiring lock: {Name:mke31e546546420a97a22fb575f397eaa8d20c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365495  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:52:46.365500  501404 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 32.649µs
	I1101 10:52:46.365506  501404 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:52:46.365516  501404 cache.go:107] acquiring lock: {Name:mk3196340dda3f6ca3036b488f880ffd822482f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365541  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:52:46.365546  501404 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.278µs
	I1101 10:52:46.365552  501404 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:52:46.365561  501404 cache.go:107] acquiring lock: {Name:mkf516acd2e5d0c72111e5669f8226bc99c3850c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365586  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:52:46.365591  501404 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.058µs
	I1101 10:52:46.365601  501404 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:52:46.365611  501404 cache.go:107] acquiring lock: {Name:mk0a26100d6da9ffb6e62c9df95140af96aec6f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365636  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:52:46.365666  501404 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 54.934µs
	I1101 10:52:46.365673  501404 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:52:46.365680  501404 cache.go:87] Successfully saved all images to host disk.
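The repeated "exists ... took Nµs ... save to tar file ... succeeded" lines above all reduce to one existence check: if the image has already been exported under .minikube/cache/images, the save is a no-op. A minimal Go sketch of that pattern (hypothetical helper, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// ensureCachedImage mirrors the cache-hit lines above: if the image was already
// exported to a tar file under the cache dir, the save is skipped.
// Hypothetical sketch, not minikube's real code.
func ensureCachedImage(image, cacheDir string) (string, error) {
	// "registry.k8s.io/kube-proxy:v1.34.1" -> "<cacheDir>/registry.k8s.io/kube-proxy_v1.34.1",
	// the path pattern visible in the log above.
	dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(dst); err == nil {
		return "already cached: " + dst, nil
	} else if !os.IsNotExist(err) {
		return "", err
	}
	// A real implementation would export the image to dst here.
	return "would export " + image + " to " + dst, nil
}

func main() {
	msg, err := ensureCachedImage("registry.k8s.io/kube-proxy:v1.34.1",
		"/tmp/minikube-cache/images/arm64")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(msg)
}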
	I1101 10:52:46.398295  501404 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:52:46.398314  501404 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:52:46.398326  501404 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:52:46.398357  501404 start.go:360] acquireMachinesLock for no-preload-548708: {Name:mk9ab5039a75ce95aea667171fcdfabc6fc7786c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.398408  501404 start.go:364] duration metric: took 35.357µs to acquireMachinesLock for "no-preload-548708"
	I1101 10:52:46.398428  501404 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:52:46.398433  501404 fix.go:54] fixHost starting: 
	I1101 10:52:46.398708  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:46.415637  501404 fix.go:112] recreateIfNeeded on no-preload-548708: state=Stopped err=<nil>
	W1101 10:52:46.415682  501404 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:52:46.033665  500273 kubeadm.go:884] updating cluster {Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:52:46.033810  500273 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:52:46.033890  500273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:46.073361  500273 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:46.073383  500273 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:52:46.073450  500273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:46.104506  500273 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:46.104525  500273 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:52:46.104534  500273 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:52:46.104632  500273 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-196911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:52:46.104718  500273 ssh_runner.go:195] Run: crio config
	I1101 10:52:46.174117  500273 cni.go:84] Creating CNI manager for ""
	I1101 10:52:46.174201  500273 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:46.174234  500273 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:52:46.174299  500273 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-196911 NodeName:newest-cni-196911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:52:46.174486  500273 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-196911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:52:46.174610  500273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:52:46.184423  500273 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:52:46.184496  500273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:52:46.193320  500273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:52:46.208209  500273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:52:46.223083  500273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 10:52:46.238186  500273 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:52:46.243599  500273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:46.258029  500273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:46.416685  500273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:46.453035  500273 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911 for IP: 192.168.76.2
	I1101 10:52:46.453055  500273 certs.go:195] generating shared ca certs ...
	I1101 10:52:46.453072  500273 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:46.453272  500273 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:52:46.453341  500273 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:52:46.453349  500273 certs.go:257] generating profile certs ...
	I1101 10:52:46.453451  500273 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.key
	I1101 10:52:46.453530  500273 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af
	I1101 10:52:46.453571  500273 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key
	I1101 10:52:46.453683  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:52:46.453721  500273 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:52:46.453735  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:52:46.453763  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:52:46.453787  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:52:46.453808  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:52:46.453852  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:46.456074  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:52:46.488110  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:52:46.518144  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:52:46.548126  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:52:46.604146  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:52:46.657284  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:52:46.704900  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:52:46.749966  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:52:46.801077  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:52:46.827858  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:52:46.874151  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:52:46.918968  500273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:52:46.950919  500273 ssh_runner.go:195] Run: openssl version
	I1101 10:52:46.958806  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:52:46.967906  500273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:52:46.972294  500273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:52:46.972379  500273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:52:47.022543  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:52:47.033861  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:52:47.048687  500273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:52:47.054179  500273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:52:47.054243  500273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:52:47.107053  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:52:47.120506  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:52:47.132088  500273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:47.139954  500273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:47.140046  500273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:47.193377  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
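The -hash/ln pairs above are how OpenSSL-style CA directories work: each certificate is looked up by its subject-name hash, so a "<hash>.0" symlink is created for it in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 51391683.0 for 294288.pem). A rough Go equivalent of those two commands (simplified sketch, assuming openssl is on PATH and the directory is writable):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the two steps above: ask openssl for the
// certificate's subject-name hash, then create the "<hash>.0" symlink that
// OpenSSL uses to locate CA certificates at verification time.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate the -f in "ln -fs"
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}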
	I1101 10:52:47.202504  500273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:52:47.209688  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:52:47.271192  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:52:47.383716  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:52:47.480428  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:52:47.609906  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:52:47.815843  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
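Each of the "-checkend 86400" invocations above asks the same question: will this certificate be expired 24 hours from now? The same check can be expressed directly with crypto/x509; a small sketch (paths taken from the log, would need root to read on the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as "openssl x509 -checkend 86400":
// will the certificate at path be expired d from now?
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}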
	I1101 10:52:47.905942  500273 kubeadm.go:401] StartCluster: {Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:47.906044  500273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:47.906145  500273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:47.964304  500273 cri.go:89] found id: "292d0cbb536acc09cd84b96d1b822feb61d97070a176a9932123014a40ee60cb"
	I1101 10:52:47.964327  500273 cri.go:89] found id: "43cff061f63df9268ac8b9a55804a126d15f4a912d0b682729bc41fab87e54d4"
	I1101 10:52:47.964333  500273 cri.go:89] found id: "4c9e83d09d804cacddc0212f96f7746196a7c47d338ed0e9519993cbb75d1314"
	I1101 10:52:47.964337  500273 cri.go:89] found id: "6ee79706bb2c3b2a369e20eed26ccdb5985aa7c70ae1cd34024086e323278927"
	I1101 10:52:47.964340  500273 cri.go:89] found id: ""
	I1101 10:52:47.964414  500273 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:52:47.987966  500273 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:47Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:52:47.988108  500273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:52:48.002378  500273 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:52:48.002457  500273 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:52:48.002550  500273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:52:48.019575  500273 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:52:48.020156  500273 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-196911" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:48.020335  500273 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-196911" cluster setting kubeconfig missing "newest-cni-196911" context setting]
	I1101 10:52:48.020706  500273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:48.022566  500273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:52:48.036347  500273 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:52:48.036446  500273 kubeadm.go:602] duration metric: took 33.965493ms to restartPrimaryControlPlane
	I1101 10:52:48.036472  500273 kubeadm.go:403] duration metric: took 130.541192ms to StartCluster
	I1101 10:52:48.036502  500273 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:48.036610  500273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:48.037441  500273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:48.037743  500273 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:52:48.038167  500273 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:52:48.038244  500273 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-196911"
	I1101 10:52:48.038258  500273 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-196911"
	W1101 10:52:48.038264  500273 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:52:48.038288  500273 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:48.038766  500273 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:48.039252  500273 config.go:182] Loaded profile config "newest-cni-196911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:48.039351  500273 addons.go:70] Setting dashboard=true in profile "newest-cni-196911"
	I1101 10:52:48.039394  500273 addons.go:239] Setting addon dashboard=true in "newest-cni-196911"
	W1101 10:52:48.039415  500273 addons.go:248] addon dashboard should already be in state true
	I1101 10:52:48.039468  500273 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:48.040025  500273 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:48.043766  500273 addons.go:70] Setting default-storageclass=true in profile "newest-cni-196911"
	I1101 10:52:48.044027  500273 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-196911"
	I1101 10:52:48.043952  500273 out.go:179] * Verifying Kubernetes components...
	I1101 10:52:48.049750  500273 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:48.051055  500273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:48.099762  500273 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:52:48.102601  500273 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:52:48.105515  500273 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:48.105539  500273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:52:48.105607  500273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:48.105773  500273 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:52:48.109178  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:52:48.109203  500273 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:52:48.109271  500273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:48.133911  500273 addons.go:239] Setting addon default-storageclass=true in "newest-cni-196911"
	W1101 10:52:48.133937  500273 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:52:48.133964  500273 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:48.134416  500273 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:48.178692  500273 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:48.178715  500273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:52:48.178780  500273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:48.179404  500273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:48.194202  500273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:48.218652  500273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:48.399293  500273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:48.429978  500273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:48.453318  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:52:48.453344  500273 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:52:48.466326  500273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:48.516831  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:52:48.516914  500273 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:52:48.639727  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:52:48.639754  500273 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:52:48.706194  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:52:48.706217  500273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:52:48.740643  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:52:48.740669  500273 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:52:48.770297  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:52:48.770325  500273 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:52:48.793949  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:52:48.793975  500273 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:52:48.816342  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:52:48.816370  500273 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:52:48.838786  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:52:48.838812  500273 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:52:48.862148  500273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:52:46.424478  501404 out.go:252] * Restarting existing docker container for "no-preload-548708" ...
	I1101 10:52:46.424569  501404 cli_runner.go:164] Run: docker start no-preload-548708
	I1101 10:52:46.846955  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:46.879533  501404 kic.go:430] container "no-preload-548708" state is running.
	I1101 10:52:46.879949  501404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:52:46.906877  501404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json ...
	I1101 10:52:46.907112  501404 machine.go:94] provisionDockerMachine start ...
	I1101 10:52:46.907173  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:46.932490  501404 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:46.932809  501404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1101 10:52:46.932825  501404 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:52:46.934045  501404 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:52:50.121004  501404 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548708
	
	I1101 10:52:50.121045  501404 ubuntu.go:182] provisioning hostname "no-preload-548708"
	I1101 10:52:50.121149  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:50.154720  501404 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:50.155043  501404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1101 10:52:50.155062  501404 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-548708 && echo "no-preload-548708" | sudo tee /etc/hostname
	I1101 10:52:50.340814  501404 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548708
	
	I1101 10:52:50.340959  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:50.362002  501404 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:50.362314  501404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1101 10:52:50.362332  501404 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-548708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-548708/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-548708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:52:50.533264  501404 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:52:50.533317  501404 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:52:50.533344  501404 ubuntu.go:190] setting up certificates
	I1101 10:52:50.533365  501404 provision.go:84] configureAuth start
	I1101 10:52:50.533437  501404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:52:50.564138  501404 provision.go:143] copyHostCerts
	I1101 10:52:50.564215  501404 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:52:50.564236  501404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:52:50.564312  501404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:52:50.564429  501404 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:52:50.564441  501404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:52:50.564469  501404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:52:50.564529  501404 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:52:50.564539  501404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:52:50.564563  501404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:52:50.564623  501404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.no-preload-548708 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-548708]
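The "generating server cert ... san=[...]" step issues a machine server certificate signed by the local CA, with the listed IPs and hostnames as subject alternative names. A simplified Go sketch of that kind of issuance (errors ignored for brevity; the template fields are illustrative, not minikube's exact logic):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in for ca.pem / ca-key.pem: a throwaway self-signed CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the san=[...] list in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-548708"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-548708"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server certificate, %d DER bytes\n", len(srvDER))
}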
	I1101 10:52:51.496017  501404 provision.go:177] copyRemoteCerts
	I1101 10:52:51.496088  501404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:52:51.496145  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:51.516596  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:51.641650  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:52:51.677386  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:52:51.715747  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:52:51.755457  501404 provision.go:87] duration metric: took 1.222062982s to configureAuth
	I1101 10:52:51.755492  501404 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:52:51.755725  501404 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:51.755857  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:51.785158  501404 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:51.785469  501404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1101 10:52:51.785497  501404 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:52:52.250822  501404 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:52:52.250886  501404 machine.go:97] duration metric: took 5.343764066s to provisionDockerMachine
	I1101 10:52:52.250912  501404 start.go:293] postStartSetup for "no-preload-548708" (driver="docker")
	I1101 10:52:52.250963  501404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:52:52.251083  501404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:52:52.251149  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:52.282516  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:52.403356  501404 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:52:52.407304  501404 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:52:52.407331  501404 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:52:52.407342  501404 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:52:52.407398  501404 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:52:52.407472  501404 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:52:52.407580  501404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:52:52.419042  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:52.450094  501404 start.go:296] duration metric: took 199.139474ms for postStartSetup
	I1101 10:52:52.450254  501404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:52:52.450320  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:52.478641  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:52.602405  501404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:52:52.609422  501404 fix.go:56] duration metric: took 6.210981629s for fixHost
	I1101 10:52:52.609444  501404 start.go:83] releasing machines lock for "no-preload-548708", held for 6.211027512s
	I1101 10:52:52.609515  501404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:52:52.638687  501404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:52:52.638773  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:52.639006  501404 ssh_runner.go:195] Run: cat /version.json
	I1101 10:52:52.639141  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:52.679140  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:52.681487  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:52.916068  501404 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:52.925659  501404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:52:53.011729  501404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:52:53.025578  501404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:52:53.025696  501404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:52:53.035024  501404 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:52:53.035131  501404 start.go:496] detecting cgroup driver to use...
	I1101 10:52:53.035208  501404 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:52:53.035284  501404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:52:53.064494  501404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:52:53.085091  501404 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:52:53.085203  501404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:52:53.120410  501404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:52:53.136706  501404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:52:53.349004  501404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:52:53.554859  501404 docker.go:234] disabling docker service ...
	I1101 10:52:53.554982  501404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:52:53.577647  501404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:52:53.590836  501404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:52:53.795957  501404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:52:53.988982  501404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:52:54.009982  501404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:52:54.036073  501404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:52:54.036258  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.047559  501404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:52:54.047716  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.058136  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.071584  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.099557  501404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:52:54.113029  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.133739  501404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.159843  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
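Reconstructed from the sed commands above (not read back from the machine), the relevant part of /etc/crio/crio.conf.d/02-crio.conf should end up roughly as:

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]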
	I1101 10:52:54.174227  501404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:52:54.186715  501404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:52:54.196903  501404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:54.400680  501404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:52:54.602200  501404 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:52:54.602296  501404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:52:54.610872  501404 start.go:564] Will wait 60s for crictl version
	I1101 10:52:54.610953  501404 ssh_runner.go:195] Run: which crictl
	I1101 10:52:54.617642  501404 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:52:54.672531  501404 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:52:54.672650  501404 ssh_runner.go:195] Run: crio --version
	I1101 10:52:54.720721  501404 ssh_runner.go:195] Run: crio --version
	I1101 10:52:54.774708  501404 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:52:54.777697  501404 cli_runner.go:164] Run: docker network inspect no-preload-548708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:52:54.812213  501404 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:52:54.816502  501404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:54.827270  501404 kubeadm.go:884] updating cluster {Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:52:54.827382  501404 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:52:54.827439  501404 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:54.904843  501404 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:54.904863  501404 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:52:54.904878  501404 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:52:54.905027  501404 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-548708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:52:54.905103  501404 ssh_runner.go:195] Run: crio config
	I1101 10:52:55.023635  501404 cni.go:84] Creating CNI manager for ""
	I1101 10:52:55.023724  501404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:55.023762  501404 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:52:55.023820  501404 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-548708 NodeName:no-preload-548708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:52:55.024013  501404 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-548708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:52:55.024170  501404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:52:55.034600  501404 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:52:55.034725  501404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:52:55.044186  501404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:52:55.082410  501404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:52:55.106400  501404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
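The 2214-byte file written above is the rendered kubeadm/kubelet/kube-proxy configuration shown earlier; minikube later diffs it against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. A minimal sketch for reading it back off the node (profile name from this log):

    # sketch: inspect the rendered config and compare it to the previous one, as the restart path does
    minikube -p no-preload-548708 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p no-preload-548708 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new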
	I1101 10:52:55.133448  501404 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:52:55.145401  501404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:55.164706  501404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:55.420091  501404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:55.470560  501404 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708 for IP: 192.168.85.2
	I1101 10:52:55.470627  501404 certs.go:195] generating shared ca certs ...
	I1101 10:52:55.470659  501404 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:55.470849  501404 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:52:55.470941  501404 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:52:55.470977  501404 certs.go:257] generating profile certs ...
	I1101 10:52:55.471128  501404 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.key
	I1101 10:52:55.471235  501404 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3
	I1101 10:52:55.471304  501404 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key
	I1101 10:52:55.471448  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:52:55.471503  501404 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:52:55.471528  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:52:55.471598  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:52:55.471650  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:52:55.471730  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:52:55.471812  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:55.472461  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:52:55.516276  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:52:55.573970  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:52:55.612983  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:52:55.661870  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:52:55.741455  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:52:55.775577  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:52:55.831644  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:52:55.869297  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:52:55.911783  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:52:55.948406  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:52:55.981093  501404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:52:56.002925  501404 ssh_runner.go:195] Run: openssl version
	I1101 10:52:56.014055  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:52:56.037412  501404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:56.042414  501404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:56.042501  501404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:56.109256  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:52:56.127857  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:52:56.141543  501404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:52:56.149809  501404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:52:56.149898  501404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:52:56.219839  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:52:56.235064  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:52:56.250356  501404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:52:56.258587  501404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:52:56.258670  501404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:52:56.322847  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
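The openssl/ln pairs above build the standard OpenSSL hashed-certificate directory: each symlink under /etc/ssl/certs is named after the certificate's subject hash plus a ".0" suffix, which is how TLS clients on the node locate the minikube CA and the test certs. A small sketch of the same mechanism, using the hash seen in this log:

    # sketch: the symlink name is the subject hash printed by openssl plus ".0"
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem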
	I1101 10:52:56.362189  501404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:52:56.377263  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:52:56.490293  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:52:56.612855  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:52:56.802744  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:52:56.903409  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:52:57.028667  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
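Each of the -checkend 86400 runs above asks openssl whether the certificate remains valid for at least another 86400 seconds (24 hours); a zero exit lets minikube keep the existing certificates and proceed straight to StartCluster. The same check can be run by hand, for example:

    # sketch: exit status 0 means the certificate does not expire within the next 24 hours
    minikube -p no-preload-548708 ssh -- sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least 24h"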
	I1101 10:52:57.115246  501404 kubeadm.go:401] StartCluster: {Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:57.115348  501404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:57.115435  501404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:57.186444  501404 cri.go:89] found id: "21b6a3d81852a5fbef2e31f92ee373c1322e58d33d0a4c6198b4f9654e688b41"
	I1101 10:52:57.186468  501404 cri.go:89] found id: "4d7c8dba98a1808a309fd3d7927f59223183ac53462318916d991ce724a3d765"
	I1101 10:52:57.186473  501404 cri.go:89] found id: "f5f4bd6b7426cda5e69e50ee4f6e6167b783e0bd20ec2f2ea8043896373ef992"
	I1101 10:52:57.186478  501404 cri.go:89] found id: "1d6ce9e953a8b3c836603bef290e36c2eae37f5508055cd9ebe57279220b4715"
	I1101 10:52:57.186481  501404 cri.go:89] found id: ""
	I1101 10:52:57.186542  501404 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:52:57.214159  501404 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:57Z" level=error msg="open /run/runc: no such file or directory"
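The warning above comes from the pre-restart unpause check: minikube asks runc to list containers so it can resume any that were left paused, and the "open /run/runc: no such file or directory" error just means runc has no state directory to enumerate on this node, so the step is skipped and startup continues. Roughly, the same probe is:

    # sketch: list runc-managed containers; a missing /run/runc state root makes this fail harmlessly
    minikube -p no-preload-548708 ssh -- sudo runc list -f json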
	I1101 10:52:57.214270  501404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:52:57.242183  501404 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:52:57.242207  501404 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:52:57.242298  501404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:52:57.261397  501404 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:52:57.262030  501404 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-548708" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:57.262310  501404 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-548708" cluster setting kubeconfig missing "no-preload-548708" context setting]
	I1101 10:52:57.262896  501404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:57.264646  501404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:52:57.282271  501404 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:52:57.282316  501404 kubeadm.go:602] duration metric: took 40.091285ms to restartPrimaryControlPlane
	I1101 10:52:57.282327  501404 kubeadm.go:403] duration metric: took 167.091414ms to StartCluster
	I1101 10:52:57.282345  501404 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:57.282420  501404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:57.283460  501404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:57.283717  501404 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:52:57.284131  501404 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:57.284098  501404 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:52:57.284177  501404 addons.go:70] Setting storage-provisioner=true in profile "no-preload-548708"
	I1101 10:52:57.284189  501404 addons.go:70] Setting dashboard=true in profile "no-preload-548708"
	I1101 10:52:57.284197  501404 addons.go:70] Setting default-storageclass=true in profile "no-preload-548708"
	I1101 10:52:57.284201  501404 addons.go:239] Setting addon dashboard=true in "no-preload-548708"
	W1101 10:52:57.284208  501404 addons.go:248] addon dashboard should already be in state true
	I1101 10:52:57.284208  501404 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-548708"
	I1101 10:52:57.284273  501404 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:57.284512  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:57.284890  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:57.293382  501404 out.go:179] * Verifying Kubernetes components...
	I1101 10:52:57.284191  501404 addons.go:239] Setting addon storage-provisioner=true in "no-preload-548708"
	W1101 10:52:57.293646  501404 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:52:57.293706  501404 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:57.294268  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:57.296717  501404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:57.320816  501404 addons.go:239] Setting addon default-storageclass=true in "no-preload-548708"
	W1101 10:52:57.320837  501404 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:52:57.320860  501404 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:57.321397  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:57.357944  501404 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:52:57.360887  501404 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:52:57.366173  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:52:57.366200  501404 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:52:57.366274  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:57.374306  501404 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:52:57.601355  500273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.202031562s)
	I1101 10:52:57.601410  500273 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.171412833s)
	I1101 10:52:57.601443  500273 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:52:57.601500  500273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:52:57.601569  500273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.135223081s)
	I1101 10:52:58.018491  500273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.15629352s)
	I1101 10:52:58.018712  500273 api_server.go:72] duration metric: took 9.980900463s to wait for apiserver process to appear ...
	I1101 10:52:58.018766  500273 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:52:58.018801  500273 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:52:58.022628  500273 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-196911 addons enable metrics-server
	
	I1101 10:52:58.025658  500273 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 10:52:58.028631  500273 addons.go:515] duration metric: took 9.990455194s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:52:58.042222  500273 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:52:58.042250  500273 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:52:58.519673  500273 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:52:58.533389  500273 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:52:58.535246  500273 api_server.go:141] control plane version: v1.34.1
	I1101 10:52:58.535316  500273 api_server.go:131] duration metric: took 516.527897ms to wait for apiserver health ...
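The 500 responses earlier are the apiserver's verbose /healthz output while the rbac/bootstrap-roles post-start hook is still running; once that hook completes the same endpoint returns a plain "ok", which is what ends the wait here. A quick sketch of the same probe through kubectl (context name taken from this log):

    # sketch: ask the apiserver for its verbose health report
    kubectl --context newest-cni-196911 get --raw='/healthz?verbose'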
	I1101 10:52:58.535341  500273 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:52:58.557526  500273 system_pods.go:59] 8 kube-system pods found
	I1101 10:52:58.557619  500273 system_pods.go:61] "coredns-66bc5c9577-nrbdx" [40aa9ab2-b153-44dd-8fd8-67a26277b297] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:52:58.557655  500273 system_pods.go:61] "etcd-newest-cni-196911" [42f247b8-6ece-44a9-93cd-beb285466fe5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:52:58.557680  500273 system_pods.go:61] "kindnet-mlxls" [0d6d41c4-8fef-48d4-ab11-4f2c76c278e6] Running
	I1101 10:52:58.557710  500273 system_pods.go:61] "kube-apiserver-newest-cni-196911" [140c210e-a29a-4e71-932d-8133da9b074f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:52:58.557744  500273 system_pods.go:61] "kube-controller-manager-newest-cni-196911" [538b5879-e897-4ab3-950e-1317c7dad7e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:52:58.557773  500273 system_pods.go:61] "kube-proxy-2psfb" [fc92af6a-7726-496b-8f2c-e315e3065bf2] Running
	I1101 10:52:58.557797  500273 system_pods.go:61] "kube-scheduler-newest-cni-196911" [e5db2498-da4d-4ca5-b16a-3f78ee27f34c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:52:58.557830  500273 system_pods.go:61] "storage-provisioner" [3987f872-17e6-466b-b60e-1e931276699e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:52:58.557856  500273 system_pods.go:74] duration metric: took 22.495158ms to wait for pod list to return data ...
	I1101 10:52:58.557881  500273 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:52:58.564486  500273 default_sa.go:45] found service account: "default"
	I1101 10:52:58.564554  500273 default_sa.go:55] duration metric: took 6.644ms for default service account to be created ...
	I1101 10:52:58.564583  500273 kubeadm.go:587] duration metric: took 10.526771155s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:52:58.564614  500273 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:52:58.569451  500273 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:52:58.569533  500273 node_conditions.go:123] node cpu capacity is 2
	I1101 10:52:58.569563  500273 node_conditions.go:105] duration metric: took 4.918224ms to run NodePressure ...
	I1101 10:52:58.569592  500273 start.go:242] waiting for startup goroutines ...
	I1101 10:52:58.569624  500273 start.go:247] waiting for cluster config update ...
	I1101 10:52:58.569650  500273 start.go:256] writing updated cluster config ...
	I1101 10:52:58.569959  500273 ssh_runner.go:195] Run: rm -f paused
	I1101 10:52:58.667211  500273 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:52:58.670858  500273 out.go:179] * Done! kubectl is now configured to use "newest-cni-196911" cluster and "default" namespace by default
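The closing line also records a one-minor-version skew between the host kubectl (1.33.2) and the cluster (1.34.1), which is within kubectl's supported +/-1 minor skew, so it is reported as informational only. A short sketch for confirming the active context and versions afterwards:

    # sketch: verify which context kubectl now points at and the client/server versions
    kubectl config current-context     # expected: newest-cni-196911
    kubectl version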
	I1101 10:52:57.378797  501404 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:57.378819  501404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:52:57.378898  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:57.381220  501404 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:57.381243  501404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:52:57.381304  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:57.428762  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:57.431213  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:57.445049  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:57.853002  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:52:57.853029  501404 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:52:57.950303  501404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:57.978681  501404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:57.986714  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:52:57.986736  501404 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:52:58.060959  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:52:58.061032  501404 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:52:58.066642  501404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:58.072191  501404 node_ready.go:35] waiting up to 6m0s for node "no-preload-548708" to be "Ready" ...
	I1101 10:52:58.148098  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:52:58.148118  501404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:52:58.252137  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:52:58.252212  501404 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:52:58.323175  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:52:58.323249  501404 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:52:58.443259  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:52:58.443283  501404 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:52:58.509163  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:52:58.509189  501404 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:52:58.528633  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:52:58.528658  501404 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:52:58.578067  501404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.376731686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.381213205Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-2psfb/POD" id=ca49d9d8-9149-4a1c-a187-035fa650f138 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.381856845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.404696984Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ca49d9d8-9149-4a1c-a187-035fa650f138 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.4056432Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7f748768-9cc7-4919-b857-dcfd5f72d5a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.427949663Z" level=info msg="Ran pod sandbox 2777922d5fdb000f43008b2ccd28e42f5a612ca78434b5162ea2aca82190fbe1 with infra container: kube-system/kindnet-mlxls/POD" id=7f748768-9cc7-4919-b857-dcfd5f72d5a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.435485971Z" level=info msg="Ran pod sandbox 78cd2bfb4ad676c73a091af3adf3ff15477bf611471b48d879d50fd76c328371 with infra container: kube-system/kube-proxy-2psfb/POD" id=ca49d9d8-9149-4a1c-a187-035fa650f138 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.444201416Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e5c290df-b89b-4d44-b2c7-187eef5b60f0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.444994055Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=82b83ab6-d63c-45a0-a370-cd7550d58f14 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.466992684Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f6e3b379-134c-4c23-a6ec-cffb471a9f48 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.467028557Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=70ec4b2b-7e8b-440c-b3a3-a657f3eac0df name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.468586003Z" level=info msg="Creating container: kube-system/kube-proxy-2psfb/kube-proxy" id=a721d30c-d3a6-4dd0-9030-32190dd6d028 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.468703863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.468729644Z" level=info msg="Creating container: kube-system/kindnet-mlxls/kindnet-cni" id=906debd3-acca-4be9-b29b-e0791f53a673 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.468809292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.529518737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.53699532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.577820133Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.578795585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.682976818Z" level=info msg="Created container a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff: kube-system/kindnet-mlxls/kindnet-cni" id=906debd3-acca-4be9-b29b-e0791f53a673 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.686730839Z" level=info msg="Starting container: a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff" id=88830b63-e260-4244-a57c-c9b28c115df7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.689923738Z" level=info msg="Started container" PID=1058 containerID=a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff description=kube-system/kindnet-mlxls/kindnet-cni id=88830b63-e260-4244-a57c-c9b28c115df7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2777922d5fdb000f43008b2ccd28e42f5a612ca78434b5162ea2aca82190fbe1
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.959213992Z" level=info msg="Created container fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9: kube-system/kube-proxy-2psfb/kube-proxy" id=a721d30c-d3a6-4dd0-9030-32190dd6d028 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.959969035Z" level=info msg="Starting container: fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9" id=9632c096-46a2-47c9-9d8e-f492949d2a65 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.970253446Z" level=info msg="Started container" PID=1059 containerID=fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9 description=kube-system/kube-proxy-2psfb/kube-proxy id=9632c096-46a2-47c9-9d8e-f492949d2a65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=78cd2bfb4ad676c73a091af3adf3ff15477bf611471b48d879d50fd76c328371
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a88721b15f260       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   2777922d5fdb0       kindnet-mlxls                               kube-system
	fdfa04af4c179       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   78cd2bfb4ad67       kube-proxy-2psfb                            kube-system
	292d0cbb536ac       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      1                   cc7f336b19733       etcd-newest-cni-196911                      kube-system
	43cff061f63df       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   1                   75a287967e9b2       kube-controller-manager-newest-cni-196911   kube-system
	4c9e83d09d804       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            1                   e82d6ae2c3a52       kube-apiserver-newest-cni-196911            kube-system
	6ee79706bb2c3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            1                   1c01607ecb3b8       kube-scheduler-newest-cni-196911            kube-system
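The table above is the post-mortem container listing minikube collects from the node's CRI runtime; a rough way to reproduce it directly is to run crictl inside the machine:

    # sketch: list all CRI-O containers (running and exited) on the node shown in this dump
    minikube -p newest-cni-196911 ssh -- sudo crictl ps -a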
	
	
	==> describe nodes <==
	Name:               newest-cni-196911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-196911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=newest-cni-196911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_52_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:52:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-196911
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:52:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:52:54 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:52:54 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:52:54 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:52:54 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-196911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1a13745d-d4b0-4a25-a286-6bb43ff747ac
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-196911                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-mlxls                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-196911             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-196911    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-2psfb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-196911             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientPID     36s                kubelet          Node newest-cni-196911 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node newest-cni-196911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-196911 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           32s                node-controller  Node newest-cni-196911 event: Registered Node newest-cni-196911 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node newest-cni-196911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node newest-cni-196911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x8 over 18s)  kubelet          Node newest-cni-196911 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-196911 event: Registered Node newest-cni-196911 in Controller
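The node is still reported NotReady above because no CNI configuration exists yet in /etc/cni/net.d; kindnet had restarted only seconds earlier, and the Ready condition flips once it writes its config. A small sketch for watching that transition:

    # sketch: watch the node condition clear once the CNI plugin writes its config
    kubectl --context newest-cni-196911 get nodes -w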
	
	
	==> dmesg <==
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:51] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:52] overlayfs: idmapped layers are currently not supported
	[ +26.480177] overlayfs: idmapped layers are currently not supported
	[  +9.079378] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [292d0cbb536acc09cd84b96d1b822feb61d97070a176a9932123014a40ee60cb] <==
	{"level":"warn","ts":"2025-11-01T10:52:51.071622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.133080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.172278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.218972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.260713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.302849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.391992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.425362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.465796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.516968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.559828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.583947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.605603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.621401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.667928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.685398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.717930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.740953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.762987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.826120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.905416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.943085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.997641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:52.042465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:52.222554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43404","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:53:04 up  2:35,  0 user,  load average: 6.87, 4.47, 3.27
	Linux newest-cni-196911 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff] <==
	I1101 10:52:55.931367       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:52:55.932717       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:52:55.939612       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:52:55.939637       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:52:55.939652       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:52:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:52:56.218276       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:52:56.218361       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:52:56.218395       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:52:56.219608       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [4c9e83d09d804cacddc0212f96f7746196a7c47d338ed0e9519993cbb75d1314] <==
	I1101 10:52:54.374170       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:52:54.375860       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:52:54.395929       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:52:54.409211       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:52:54.478722       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:52:54.478786       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:52:54.674328       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:52:54.958531       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:52:55.000071       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:52:55.000440       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:52:56.552729       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:52:56.836911       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:52:57.125530       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:52:57.197318       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:52:57.913624       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.140.221"}
	I1101 10:52:58.007248       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.159.160"}
	E1101 10:53:00.643703       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1101 10:53:00.650459       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-11-01T10:53:00.651527Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001995a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1101 10:53:00.651632       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 1.094763ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1101 10:53:00.651818       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1101 10:53:00.653429       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="9.808386ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/kube-controller-manager-newest-cni-196911/status" result=null
	I1101 10:53:00.812486       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:53:00.839989       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:53:00.859396       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [43cff061f63df9268ac8b9a55804a126d15f4a912d0b682729bc41fab87e54d4] <==
	I1101 10:53:00.509653       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:53:00.538598       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:53:00.557013       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:53:00.557130       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:53:00.571246       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:53:00.559864       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:53:00.559880       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:53:00.578269       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:53:00.559921       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:53:00.571394       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:53:00.557548       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:53:00.589805       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:53:00.591067       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:53:00.578459       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:53:00.578473       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:53:00.606955       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:53:00.571364       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:53:00.607280       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:53:00.571537       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:53:00.571549       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:53:00.608314       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:53:00.663519       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:53:00.663591       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:53:00.663623       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:53:00.765109       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9] <==
	I1101 10:52:58.361282       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:52:58.518093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:52:58.953850       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:52:58.953970       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:52:58.981082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:53:00.662480       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:53:00.662613       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:53:01.142127       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:53:01.142547       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:53:01.142761       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:53:01.144217       1 config.go:200] "Starting service config controller"
	I1101 10:53:01.145018       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:53:01.145075       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:53:01.145105       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:53:01.145142       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:53:01.145170       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:53:01.145857       1 config.go:309] "Starting node config controller"
	I1101 10:53:01.151702       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:53:01.151746       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:53:01.245110       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:53:01.250825       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:53:01.250845       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6ee79706bb2c3b2a369e20eed26ccdb5985aa7c70ae1cd34024086e323278927] <==
	I1101 10:52:55.187486       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:53:02.298331       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:53:02.298374       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:53:02.305959       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:53:02.306234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:02.306408       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:02.306208       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:53:02.306487       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:53:02.306246       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:53:02.330781       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:53:02.306260       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:53:02.418260       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:53:02.419383       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:02.434920       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:52:51 newest-cni-196911 kubelet[730]: E1101 10:52:51.145725     730 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-196911\" not found" node="newest-cni-196911"
	Nov 01 10:52:53 newest-cni-196911 kubelet[730]: E1101 10:52:53.598188     730 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-196911\" not found" node="newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.038395     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.693038     730 apiserver.go:52] "Watching apiserver"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.833497     730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.862021     730 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.862121     730 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.862150     730 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873436     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-xtables-lock\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873493     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc92af6a-7726-496b-8f2c-e315e3065bf2-lib-modules\") pod \"kube-proxy-2psfb\" (UID: \"fc92af6a-7726-496b-8f2c-e315e3065bf2\") " pod="kube-system/kube-proxy-2psfb"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873534     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc92af6a-7726-496b-8f2c-e315e3065bf2-xtables-lock\") pod \"kube-proxy-2psfb\" (UID: \"fc92af6a-7726-496b-8f2c-e315e3065bf2\") " pod="kube-system/kube-proxy-2psfb"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873554     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-cni-cfg\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873574     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-lib-modules\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873858     730 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: E1101 10:52:54.991686     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-196911\" already exists" pod="kube-system/kube-scheduler-newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.991727     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: I1101 10:52:55.197470     730 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: E1101 10:52:55.241242     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-196911\" already exists" pod="kube-system/etcd-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: I1101 10:52:55.241279     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: E1101 10:52:55.310446     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-196911\" already exists" pod="kube-system/kube-apiserver-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: I1101 10:52:55.310483     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: E1101 10:52:55.442820     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-196911\" already exists" pod="kube-system/kube-controller-manager-newest-cni-196911"
	Nov 01 10:53:00 newest-cni-196911 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:53:00 newest-cni-196911 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:53:00 newest-cni-196911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
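The tail of the kubelet journal above ends with systemd deactivating kubelet.service at 10:53:00, which is consistent with the pause operation under test rather than a crash. A quick way to confirm the unit state from inside the node is an ssh pass-through; the systemctl invocation below is a generic sketch for manual debugging, not part of the harness output:

	out/minikube-linux-arm64 -p newest-cni-196911 ssh -- sudo systemctl status kubelet --no-pager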
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-196911 -n newest-cni-196911
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-196911 -n newest-cni-196911: exit status 2 (540.186847ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
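The non-zero exit here encodes component state (at least one component is not running) rather than a command failure, which is why the harness treats exit status 2 as potentially acceptable. For manual debugging, the same Go-template selector can combine several fields, or the full status can be dumped as JSON; the extra field and the JSON flag below are assumed from the standard minikube status options rather than taken from this run:

	out/minikube-linux-arm64 status -p newest-cni-196911 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	out/minikube-linux-arm64 status -p newest-cni-196911 -o json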
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-196911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-nrbdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vlwr4 kubernetes-dashboard-855c9754f9-mfggn
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vlwr4 kubernetes-dashboard-855c9754f9-mfggn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vlwr4 kubernetes-dashboard-855c9754f9-mfggn: exit status 1 (138.656163ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-nrbdx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-vlwr4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mfggn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vlwr4 kubernetes-dashboard-855c9754f9-mfggn: exit status 1
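The describe step fails with NotFound for every pod that the field-selector query had just reported as non-running, which suggests those pods were deleted or replaced in the window between the two kubectl calls. The query pattern itself, reusing exactly the flags shown above, is useful on its own for spotting unhealthy pods across all namespaces:

	kubectl --context newest-cni-196911 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'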
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-196911
helpers_test.go:243: (dbg) docker inspect newest-cni-196911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8",
	        "Created": "2025-11-01T10:51:57.909472706Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:52:39.650076766Z",
	            "FinishedAt": "2025-11-01T10:52:38.688875583Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/hostname",
	        "HostsPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/hosts",
	        "LogPath": "/var/lib/docker/containers/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8/017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8-json.log",
	        "Name": "/newest-cni-196911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-196911:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-196911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "017ea6857675a3158cfc8b26266146087f4f5ce333c783dfd29fe903107b1de8",
	                "LowerDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0bee95e2bb8bd3e7aa92a9c9c8779e03e6f0a04880a9c396ab9266aca420227/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-196911",
	                "Source": "/var/lib/docker/volumes/newest-cni-196911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-196911",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-196911",
	                "name.minikube.sigs.k8s.io": "newest-cni-196911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32e5dca3a74dd0b1be2be7d90351d66259d21d75fe22d4edca6510aaaf1c4188",
	            "SandboxKey": "/var/run/docker/netns/32e5dca3a74d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-196911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:14:3a:09:09:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e268685f915b61a03d0e4cd44fdcaaee41eecaa2cf061bd3b1cfc552fbc84998",
	                    "EndpointID": "06c4792348b03b7453cb4cc8241149eca6eae41534e470f189acadf178cb50d6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-196911",
	                        "017ea6857675"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
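For targeted checks, docker inspect accepts a Go template instead of dumping the whole document, and docker port resolves a single published port; both are standard Docker CLI forms rather than harness commands. Against the JSON above they would report a running, unpaused container with 8443/tcp published on 127.0.0.1:33461:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-196911
	docker port newest-cni-196911 8443/tcp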
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-196911 -n newest-cni-196911
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-196911 -n newest-cni-196911: exit status 2 (477.193294ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-196911 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-196911 logs -n 25: (1.693395644s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-499088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ stop    │ -p embed-certs-499088 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ start   │ -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:51 UTC │
	│ image   │ default-k8s-diff-port-014050 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │ 01 Nov 25 10:50 UTC │
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p disable-driver-mounts-514829                                                                                                                                                                                                               │ disable-driver-mounts-514829 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ image   │ embed-certs-499088 image list --format=json                                                                                                                                                                                                   │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ pause   │ -p embed-certs-499088 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p no-preload-548708 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p newest-cni-196911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p newest-cni-196911 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable dashboard -p newest-cni-196911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable dashboard -p no-preload-548708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ image   │ newest-cni-196911 image list --format=json                                                                                                                                                                                                    │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ pause   │ -p newest-cni-196911 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:52:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:52:46.080449  501404 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:52:46.080568  501404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:46.080573  501404 out.go:374] Setting ErrFile to fd 2...
	I1101 10:52:46.080578  501404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:46.081885  501404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:52:46.082317  501404 out.go:368] Setting JSON to false
	I1101 10:52:46.083206  501404 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9318,"bootTime":1761985048,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:52:46.083291  501404 start.go:143] virtualization:  
	I1101 10:52:46.086775  501404 out.go:179] * [no-preload-548708] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:52:46.090668  501404 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:52:46.090833  501404 notify.go:221] Checking for updates...
	I1101 10:52:46.097246  501404 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:52:46.100088  501404 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:46.102854  501404 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:52:46.106548  501404 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:52:46.109447  501404 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:52:46.112823  501404 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:46.113447  501404 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:52:46.153891  501404 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:52:46.154028  501404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:52:46.245708  501404 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:52:46.234362529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:52:46.245844  501404 docker.go:319] overlay module found
	I1101 10:52:46.249158  501404 out.go:179] * Using the docker driver based on existing profile
	I1101 10:52:46.252304  501404 start.go:309] selected driver: docker
	I1101 10:52:46.252326  501404 start.go:930] validating driver "docker" against &{Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:46.252468  501404 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:52:46.253626  501404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:52:46.351692  501404 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:52:46.337682636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:52:46.352044  501404 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:52:46.352072  501404 cni.go:84] Creating CNI manager for ""
	I1101 10:52:46.352127  501404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:46.352169  501404 start.go:353] cluster config:
	{Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:46.355586  501404 out.go:179] * Starting "no-preload-548708" primary control-plane node in "no-preload-548708" cluster
	I1101 10:52:46.358537  501404 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:52:46.361517  501404 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:52:46.364444  501404 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:52:46.364627  501404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json ...
	I1101 10:52:46.365014  501404 cache.go:107] acquiring lock: {Name:mk87c12063bfe6477c1b6ed8fc827cc60e9ca811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365097  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:52:46.365105  501404 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.659µs
	I1101 10:52:46.365113  501404 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:52:46.365125  501404 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:52:46.365259  501404 cache.go:107] acquiring lock: {Name:mk4c1242d2913ae89c6c2d48e391247cfb4b6c0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365322  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:52:46.365330  501404 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 76.301µs
	I1101 10:52:46.365338  501404 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:52:46.365349  501404 cache.go:107] acquiring lock: {Name:mk98f5306fba9c79ff24fb30add0aac4b2ea9d11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365391  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:52:46.365398  501404 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 50.2µs
	I1101 10:52:46.365404  501404 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:52:46.365415  501404 cache.go:107] acquiring lock: {Name:mkd94f63e239c14a2fc215ef4549c0b3008ae371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365442  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:52:46.365447  501404 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 33.51µs
	I1101 10:52:46.365454  501404 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:52:46.365469  501404 cache.go:107] acquiring lock: {Name:mke31e546546420a97a22fb575f397eaa8d20c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365495  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:52:46.365500  501404 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 32.649µs
	I1101 10:52:46.365506  501404 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:52:46.365516  501404 cache.go:107] acquiring lock: {Name:mk3196340dda3f6ca3036b488f880ffd822482f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365541  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:52:46.365546  501404 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.278µs
	I1101 10:52:46.365552  501404 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:52:46.365561  501404 cache.go:107] acquiring lock: {Name:mkf516acd2e5d0c72111e5669f8226bc99c3850c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365586  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:52:46.365591  501404 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.058µs
	I1101 10:52:46.365601  501404 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:52:46.365611  501404 cache.go:107] acquiring lock: {Name:mk0a26100d6da9ffb6e62c9df95140af96aec6f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.365636  501404 cache.go:115] /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:52:46.365666  501404 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 54.934µs
	I1101 10:52:46.365673  501404 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:52:46.365680  501404 cache.go:87] Successfully saved all images to host disk.
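The cache checks above resolve each image reference to a tarball path under .minikube/cache/images/<arch>/ (with the ":" tag separator rewritten to "_") and skip the save when that file already exists. A minimal sketch of the same path mapping and existence check, assuming that layout; the cacheRoot value and imageCachePath helper are illustrative, not minikube's actual API.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// imageCachePath maps "registry.k8s.io/kube-proxy:v1.34.1" to
	// <root>/images/arm64/registry.k8s.io/kube-proxy_v1.34.1,
	// mirroring the paths shown in the log above.
	func imageCachePath(root, arch, ref string) string {
		return filepath.Join(root, "images", arch, strings.ReplaceAll(ref, ":", "_"))
	}

	func main() {
		root := os.ExpandEnv("$HOME/.minikube/cache") // assumed cache root
		for _, ref := range []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-proxy:v1.34.1",
		} {
			p := imageCachePath(root, "arm64", ref)
			if _, err := os.Stat(p); err == nil {
				fmt.Println("cached:", p) // equivalent of "exists ... succeeded" in the log
			} else {
				fmt.Println("missing, would save:", p)
			}
		}
	}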
	I1101 10:52:46.398295  501404 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:52:46.398314  501404 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:52:46.398326  501404 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:52:46.398357  501404 start.go:360] acquireMachinesLock for no-preload-548708: {Name:mk9ab5039a75ce95aea667171fcdfabc6fc7786c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:52:46.398408  501404 start.go:364] duration metric: took 35.357µs to acquireMachinesLock for "no-preload-548708"
	I1101 10:52:46.398428  501404 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:52:46.398433  501404 fix.go:54] fixHost starting: 
	I1101 10:52:46.398708  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:46.415637  501404 fix.go:112] recreateIfNeeded on no-preload-548708: state=Stopped err=<nil>
	W1101 10:52:46.415682  501404 fix.go:138] unexpected machine state, will restart: <nil>
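The restart decision above hinges on a single `docker container inspect --format={{.State.Status}}` query. A small sketch of issuing that same query from Go via os/exec, assuming docker is on PATH; the container name is taken from the log, and the "exited" check stands in for minikube's own Stopped state.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query the log shows: docker container inspect no-preload-548708 --format={{.State.Status}}
		out, err := exec.Command("docker", "container", "inspect",
			"no-preload-548708", "--format", "{{.State.Status}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		state := strings.TrimSpace(string(out)) // docker reports e.g. "running" or "exited"
		fmt.Println("state:", state)
		if state == "exited" || state == "created" {
			fmt.Println("machine stopped; a restart would be attempted here")
		}
	}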
	I1101 10:52:46.033665  500273 kubeadm.go:884] updating cluster {Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:52:46.033810  500273 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:52:46.033890  500273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:46.073361  500273 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:46.073383  500273 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:52:46.073450  500273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:46.104506  500273 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:46.104525  500273 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:52:46.104534  500273 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:52:46.104632  500273 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-196911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
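The drop-in rendered above uses the standard systemd override pattern: the empty ExecStart= line first clears the command inherited from kubelet.service before the new one is set. A sketch that renders a comparable drop-in from a template, using the flag values shown in the log; renderKubeletDropIn-style templating here is illustrative, not minikube's generator.

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn mirrors the [Service] override shown above: the empty
	// ExecStart= resets any ExecStart inherited from the base unit.
	const dropIn = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.34.1",
			"NodeName":          "newest-cni-196911",
			"NodeIP":            "192.168.76.2",
		})
	}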
	I1101 10:52:46.104718  500273 ssh_runner.go:195] Run: crio config
	I1101 10:52:46.174117  500273 cni.go:84] Creating CNI manager for ""
	I1101 10:52:46.174201  500273 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:46.174234  500273 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:52:46.174299  500273 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-196911 NodeName:newest-cni-196911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:52:46.174486  500273 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-196911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:52:46.174610  500273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:52:46.184423  500273 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:52:46.184496  500273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:52:46.193320  500273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:52:46.208209  500273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:52:46.223083  500273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
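The kubeadm config just rendered and copied to /var/tmp/minikube/kubeadm.yaml.new is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that splits such a stream on document separators and reports each kind, assuming the input has been read from a local copy of the file; purely illustrative.

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // assumed local copy of the rendered config
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
		// kubeadm configs form a YAML stream; documents are separated by "---" lines.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			if m := kindRe.FindStringSubmatch(doc); m != nil {
				fmt.Printf("document %d: %s\n", i+1, m[1])
			}
		}
	}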
	I1101 10:52:46.238186  500273 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:52:46.243599  500273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
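The two commands above first check whether control-plane.minikube.internal is already pinned in /etc/hosts and, if not, rewrite the file by dropping any stale entry and appending the current IP. A sketch of the same idempotent update in Go, writing to a scratch path instead of /etc/hosts; updateHostsEntry is an illustrative helper, not minikube code.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHostsEntry removes any existing line ending in "\thost" and appends
	// "ip\thost", mirroring the grep -v / echo pipeline in the log above.
	func updateHostsEntry(contents, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(contents, "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
			fmt.Sprintf("\n%s\t%s\n", ip, host)
	}

	func main() {
		data, _ := os.ReadFile("/etc/hosts")
		updated := updateHostsEntry(string(data), "192.168.76.2", "control-plane.minikube.internal")
		// Write the result to a scratch file rather than /etc/hosts in this sketch.
		_ = os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644)
		fmt.Println(updated)
	}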
	I1101 10:52:46.258029  500273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:46.416685  500273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:46.453035  500273 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911 for IP: 192.168.76.2
	I1101 10:52:46.453055  500273 certs.go:195] generating shared ca certs ...
	I1101 10:52:46.453072  500273 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:46.453272  500273 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:52:46.453341  500273 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:52:46.453349  500273 certs.go:257] generating profile certs ...
	I1101 10:52:46.453451  500273 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/client.key
	I1101 10:52:46.453530  500273 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key.415499af
	I1101 10:52:46.453571  500273 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key
	I1101 10:52:46.453683  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:52:46.453721  500273 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:52:46.453735  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:52:46.453763  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:52:46.453787  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:52:46.453808  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:52:46.453852  500273 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:46.456074  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:52:46.488110  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:52:46.518144  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:52:46.548126  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:52:46.604146  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:52:46.657284  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:52:46.704900  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:52:46.749966  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/newest-cni-196911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:52:46.801077  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:52:46.827858  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:52:46.874151  500273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:52:46.918968  500273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:52:46.950919  500273 ssh_runner.go:195] Run: openssl version
	I1101 10:52:46.958806  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:52:46.967906  500273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:52:46.972294  500273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:52:46.972379  500273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:52:47.022543  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:52:47.033861  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:52:47.048687  500273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:52:47.054179  500273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:52:47.054243  500273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:52:47.107053  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:52:47.120506  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:52:47.132088  500273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:47.139954  500273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:47.140046  500273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:47.193377  500273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
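Each certificate above is hashed with `openssl x509 -hash -noout` and then linked as /etc/ssl/certs/<hash>.0, the c_rehash layout that lets the system trust store find it by subject hash. A sketch reproducing that hash-and-link step, assuming openssl is installed; the link is created under /tmp instead of /etc/ssl/certs.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // assumed cert path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
		link := "/tmp/certs/" + hash + ".0"    // sketch target instead of /etc/ssl/certs
		_ = os.MkdirAll("/tmp/certs", 0755)
		_ = os.Remove(link) // ln -fs semantics: replace an existing link
		if err := os.Symlink(cert, link); err != nil {
			fmt.Println("symlink failed:", err)
			return
		}
		fmt.Println(link, "->", cert)
	}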
	I1101 10:52:47.202504  500273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:52:47.209688  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:52:47.271192  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:52:47.383716  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:52:47.480428  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:52:47.609906  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:52:47.815843  500273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
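The `-checkend 86400` runs above ask whether each certificate will still be valid 24 hours from now. The same check can be done without shelling out by parsing the PEM and comparing NotAfter; a minimal sketch, assuming a readable local copy of one of the certificates named in the log.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // assumed local copy of a cert from the log
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse failed:", err)
			return
		}
		// Equivalent of `openssl x509 -checkend 86400`: still valid 24h from now?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println("certificate still valid past 24h:", cert.NotAfter)
		}
	}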
	I1101 10:52:47.905942  500273 kubeadm.go:401] StartCluster: {Name:newest-cni-196911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-196911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:47.906044  500273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:47.906145  500273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:47.964304  500273 cri.go:89] found id: "292d0cbb536acc09cd84b96d1b822feb61d97070a176a9932123014a40ee60cb"
	I1101 10:52:47.964327  500273 cri.go:89] found id: "43cff061f63df9268ac8b9a55804a126d15f4a912d0b682729bc41fab87e54d4"
	I1101 10:52:47.964333  500273 cri.go:89] found id: "4c9e83d09d804cacddc0212f96f7746196a7c47d338ed0e9519993cbb75d1314"
	I1101 10:52:47.964337  500273 cri.go:89] found id: "6ee79706bb2c3b2a369e20eed26ccdb5985aa7c70ae1cd34024086e323278927"
	I1101 10:52:47.964340  500273 cri.go:89] found id: ""
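The container IDs listed above come from the crictl query a few lines earlier, filtered by the io.kubernetes.pod.namespace label. A sketch issuing the same listing from Go, assuming crictl and sudo are available on the node; only the command already shown in the log is used.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same listing the log shows:
		// sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}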
	I1101 10:52:47.964414  500273 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:52:47.987966  500273 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:47Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:52:47.988108  500273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:52:48.002378  500273 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:52:48.002457  500273 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:52:48.002550  500273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:52:48.019575  500273 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:52:48.020156  500273 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-196911" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:48.020335  500273 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-196911" cluster setting kubeconfig missing "newest-cni-196911" context setting]
	I1101 10:52:48.020706  500273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:48.022566  500273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:52:48.036347  500273 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:52:48.036446  500273 kubeadm.go:602] duration metric: took 33.965493ms to restartPrimaryControlPlane
	I1101 10:52:48.036472  500273 kubeadm.go:403] duration metric: took 130.541192ms to StartCluster
	I1101 10:52:48.036502  500273 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:48.036610  500273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:48.037441  500273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:48.037743  500273 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:52:48.038167  500273 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:52:48.038244  500273 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-196911"
	I1101 10:52:48.038258  500273 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-196911"
	W1101 10:52:48.038264  500273 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:52:48.038288  500273 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:48.038766  500273 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:48.039252  500273 config.go:182] Loaded profile config "newest-cni-196911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:48.039351  500273 addons.go:70] Setting dashboard=true in profile "newest-cni-196911"
	I1101 10:52:48.039394  500273 addons.go:239] Setting addon dashboard=true in "newest-cni-196911"
	W1101 10:52:48.039415  500273 addons.go:248] addon dashboard should already be in state true
	I1101 10:52:48.039468  500273 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:48.040025  500273 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:48.043766  500273 addons.go:70] Setting default-storageclass=true in profile "newest-cni-196911"
	I1101 10:52:48.044027  500273 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-196911"
	I1101 10:52:48.043952  500273 out.go:179] * Verifying Kubernetes components...
	I1101 10:52:48.049750  500273 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:48.051055  500273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:48.099762  500273 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:52:48.102601  500273 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:52:48.105515  500273 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:48.105539  500273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:52:48.105607  500273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:48.105773  500273 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:52:48.109178  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:52:48.109203  500273 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:52:48.109271  500273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:48.133911  500273 addons.go:239] Setting addon default-storageclass=true in "newest-cni-196911"
	W1101 10:52:48.133937  500273 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:52:48.133964  500273 host.go:66] Checking if "newest-cni-196911" exists ...
	I1101 10:52:48.134416  500273 cli_runner.go:164] Run: docker container inspect newest-cni-196911 --format={{.State.Status}}
	I1101 10:52:48.178692  500273 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:48.178715  500273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:52:48.178780  500273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-196911
	I1101 10:52:48.179404  500273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:48.194202  500273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:48.218652  500273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/newest-cni-196911/id_rsa Username:docker}
	I1101 10:52:48.399293  500273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:48.429978  500273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:48.453318  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:52:48.453344  500273 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:52:48.466326  500273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:48.516831  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:52:48.516914  500273 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:52:48.639727  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:52:48.639754  500273 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:52:48.706194  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:52:48.706217  500273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:52:48.740643  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:52:48.740669  500273 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:52:48.770297  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:52:48.770325  500273 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:52:48.793949  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:52:48.793975  500273 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:52:48.816342  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:52:48.816370  500273 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:52:48.838786  500273 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:52:48.838812  500273 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:52:48.862148  500273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:52:46.424478  501404 out.go:252] * Restarting existing docker container for "no-preload-548708" ...
	I1101 10:52:46.424569  501404 cli_runner.go:164] Run: docker start no-preload-548708
	I1101 10:52:46.846955  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:46.879533  501404 kic.go:430] container "no-preload-548708" state is running.
	I1101 10:52:46.879949  501404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:52:46.906877  501404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/config.json ...
	I1101 10:52:46.907112  501404 machine.go:94] provisionDockerMachine start ...
	I1101 10:52:46.907173  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:46.932490  501404 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:46.932809  501404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1101 10:52:46.932825  501404 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:52:46.934045  501404 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:52:50.121004  501404 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548708
	
	I1101 10:52:50.121045  501404 ubuntu.go:182] provisioning hostname "no-preload-548708"
	I1101 10:52:50.121149  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:50.154720  501404 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:50.155043  501404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1101 10:52:50.155062  501404 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-548708 && echo "no-preload-548708" | sudo tee /etc/hostname
	I1101 10:52:50.340814  501404 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548708
	
	I1101 10:52:50.340959  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:50.362002  501404 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:50.362314  501404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1101 10:52:50.362332  501404 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-548708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-548708/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-548708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:52:50.533264  501404 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:52:50.533317  501404 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:52:50.533344  501404 ubuntu.go:190] setting up certificates
	I1101 10:52:50.533365  501404 provision.go:84] configureAuth start
	I1101 10:52:50.533437  501404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:52:50.564138  501404 provision.go:143] copyHostCerts
	I1101 10:52:50.564215  501404 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:52:50.564236  501404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:52:50.564312  501404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:52:50.564429  501404 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:52:50.564441  501404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:52:50.564469  501404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:52:50.564529  501404 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:52:50.564539  501404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:52:50.564563  501404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:52:50.564623  501404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.no-preload-548708 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-548708]
	I1101 10:52:51.496017  501404 provision.go:177] copyRemoteCerts
	I1101 10:52:51.496088  501404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:52:51.496145  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:51.516596  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:51.641650  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:52:51.677386  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:52:51.715747  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:52:51.755457  501404 provision.go:87] duration metric: took 1.222062982s to configureAuth
	I1101 10:52:51.755492  501404 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:52:51.755725  501404 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:51.755857  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:51.785158  501404 main.go:143] libmachine: Using SSH client type: native
	I1101 10:52:51.785469  501404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1101 10:52:51.785497  501404 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:52:52.250822  501404 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:52:52.250886  501404 machine.go:97] duration metric: took 5.343764066s to provisionDockerMachine
	I1101 10:52:52.250912  501404 start.go:293] postStartSetup for "no-preload-548708" (driver="docker")
	I1101 10:52:52.250963  501404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:52:52.251083  501404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:52:52.251149  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:52.282516  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:52.403356  501404 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:52:52.407304  501404 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:52:52.407331  501404 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:52:52.407342  501404 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:52:52.407398  501404 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:52:52.407472  501404 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:52:52.407580  501404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:52:52.419042  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:52.450094  501404 start.go:296] duration metric: took 199.139474ms for postStartSetup
	I1101 10:52:52.450254  501404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:52:52.450320  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:52.478641  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:52.602405  501404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:52:52.609422  501404 fix.go:56] duration metric: took 6.210981629s for fixHost
	I1101 10:52:52.609444  501404 start.go:83] releasing machines lock for "no-preload-548708", held for 6.211027512s
	I1101 10:52:52.609515  501404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-548708
	I1101 10:52:52.638687  501404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:52:52.638773  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:52.639006  501404 ssh_runner.go:195] Run: cat /version.json
	I1101 10:52:52.639141  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:52.679140  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:52.681487  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:52.916068  501404 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:52.925659  501404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:52:53.011729  501404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:52:53.025578  501404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:52:53.025696  501404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:52:53.035024  501404 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:52:53.035131  501404 start.go:496] detecting cgroup driver to use...
	I1101 10:52:53.035208  501404 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:52:53.035284  501404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:52:53.064494  501404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:52:53.085091  501404 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:52:53.085203  501404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:52:53.120410  501404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:52:53.136706  501404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:52:53.349004  501404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:52:53.554859  501404 docker.go:234] disabling docker service ...
	I1101 10:52:53.554982  501404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:52:53.577647  501404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:52:53.590836  501404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:52:53.795957  501404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:52:53.988982  501404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:52:54.009982  501404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:52:54.036073  501404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:52:54.036258  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.047559  501404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:52:54.047716  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.058136  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.071584  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.099557  501404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:52:54.113029  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.133739  501404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.159843  501404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:52:54.174227  501404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:52:54.186715  501404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:52:54.196903  501404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:54.400680  501404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:52:54.602200  501404 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:52:54.602296  501404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:52:54.610872  501404 start.go:564] Will wait 60s for crictl version
	I1101 10:52:54.610953  501404 ssh_runner.go:195] Run: which crictl
	I1101 10:52:54.617642  501404 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:52:54.672531  501404 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
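The restart sequence above waits up to 60 seconds for /var/run/crio/crio.sock to appear and then for crictl to answer. A minimal Go sketch of the same poll-until-timeout pattern, not minikube's actual implementation; the socket path is the one written to /etc/crictl.yaml earlier, and the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path (here the CRI-O socket) until it
// exists or the timeout elapses, mirroring the "Will wait 60s" loops above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	// /var/run/crio/crio.sock is the runtime-endpoint configured above.
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}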
	I1101 10:52:54.672650  501404 ssh_runner.go:195] Run: crio --version
	I1101 10:52:54.720721  501404 ssh_runner.go:195] Run: crio --version
	I1101 10:52:54.774708  501404 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:52:54.777697  501404 cli_runner.go:164] Run: docker network inspect no-preload-548708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:52:54.812213  501404 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:52:54.816502  501404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:54.827270  501404 kubeadm.go:884] updating cluster {Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:52:54.827382  501404 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:52:54.827439  501404 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:52:54.904843  501404 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:52:54.904863  501404 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:52:54.904878  501404 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:52:54.905027  501404 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-548708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:52:54.905103  501404 ssh_runner.go:195] Run: crio config
	I1101 10:52:55.023635  501404 cni.go:84] Creating CNI manager for ""
	I1101 10:52:55.023724  501404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:52:55.023762  501404 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:52:55.023820  501404 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-548708 NodeName:no-preload-548708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:52:55.024013  501404 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-548708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:52:55.024170  501404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:52:55.034600  501404 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:52:55.034725  501404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:52:55.044186  501404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:52:55.082410  501404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:52:55.106400  501404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
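The 2214-byte kubeadm.yaml.new written here is the multi-document YAML shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch, assuming the gopkg.in/yaml.v3 module is available, that decodes such a stream and prints each document's apiVersion and kind as a quick sanity check before handing the file to kubeadm:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path to the generated config is passed on the command line,
	// e.g. the kubeadm.yaml written by minikube on the node.
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Only the identifying fields are decoded; everything else is ignored.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}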
	I1101 10:52:55.133448  501404 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:52:55.145401  501404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:52:55.164706  501404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:55.420091  501404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:55.470560  501404 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708 for IP: 192.168.85.2
	I1101 10:52:55.470627  501404 certs.go:195] generating shared ca certs ...
	I1101 10:52:55.470659  501404 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:55.470849  501404 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:52:55.470941  501404 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:52:55.470977  501404 certs.go:257] generating profile certs ...
	I1101 10:52:55.471128  501404 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.key
	I1101 10:52:55.471235  501404 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key.71cdcdd3
	I1101 10:52:55.471304  501404 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key
	I1101 10:52:55.471448  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:52:55.471503  501404 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:52:55.471528  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:52:55.471598  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:52:55.471650  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:52:55.471730  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:52:55.471812  501404 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:52:55.472461  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:52:55.516276  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:52:55.573970  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:52:55.612983  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:52:55.661870  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:52:55.741455  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:52:55.775577  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:52:55.831644  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:52:55.869297  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:52:55.911783  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:52:55.948406  501404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:52:55.981093  501404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:52:56.002925  501404 ssh_runner.go:195] Run: openssl version
	I1101 10:52:56.014055  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:52:56.037412  501404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:56.042414  501404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:56.042501  501404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:52:56.109256  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:52:56.127857  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:52:56.141543  501404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:52:56.149809  501404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:52:56.149898  501404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:52:56.219839  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:52:56.235064  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:52:56.250356  501404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:52:56.258587  501404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:52:56.258670  501404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:52:56.322847  501404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:52:56.362189  501404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:52:56.377263  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:52:56.490293  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:52:56.612855  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:52:56.802744  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:52:56.903409  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:52:57.028667  501404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
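The six openssl runs above use "-checkend 86400" to verify that each control-plane certificate will still be valid 24 hours from now. The same check can be expressed with Go's standard library; this sketch takes the certificate path on the command line and mirrors openssl's exit status:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Equivalent of `openssl x509 -checkend 86400`: succeed only if the PEM
// certificate at os.Args[1] is still valid 24 hours from now.
func main() {
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	cutoff := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(cutoff) {
		fmt.Printf("certificate expires %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}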
	I1101 10:52:57.115246  501404 kubeadm.go:401] StartCluster: {Name:no-preload-548708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-548708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:52:57.115348  501404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:57.115435  501404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:57.186444  501404 cri.go:89] found id: "21b6a3d81852a5fbef2e31f92ee373c1322e58d33d0a4c6198b4f9654e688b41"
	I1101 10:52:57.186468  501404 cri.go:89] found id: "4d7c8dba98a1808a309fd3d7927f59223183ac53462318916d991ce724a3d765"
	I1101 10:52:57.186473  501404 cri.go:89] found id: "f5f4bd6b7426cda5e69e50ee4f6e6167b783e0bd20ec2f2ea8043896373ef992"
	I1101 10:52:57.186478  501404 cri.go:89] found id: "1d6ce9e953a8b3c836603bef290e36c2eae37f5508055cd9ebe57279220b4715"
	I1101 10:52:57.186481  501404 cri.go:89] found id: ""
	I1101 10:52:57.186542  501404 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:52:57.214159  501404 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:57Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:52:57.214270  501404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:52:57.242183  501404 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:52:57.242207  501404 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:52:57.242298  501404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:52:57.261397  501404 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:52:57.262030  501404 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-548708" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:57.262310  501404 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-292445/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-548708" cluster setting kubeconfig missing "no-preload-548708" context setting]
	I1101 10:52:57.262896  501404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:57.264646  501404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:52:57.282271  501404 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:52:57.282316  501404 kubeadm.go:602] duration metric: took 40.091285ms to restartPrimaryControlPlane
	I1101 10:52:57.282327  501404 kubeadm.go:403] duration metric: took 167.091414ms to StartCluster
	I1101 10:52:57.282345  501404 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:57.282420  501404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:52:57.283460  501404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:52:57.283717  501404 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:52:57.284131  501404 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:57.284098  501404 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:52:57.284177  501404 addons.go:70] Setting storage-provisioner=true in profile "no-preload-548708"
	I1101 10:52:57.284189  501404 addons.go:70] Setting dashboard=true in profile "no-preload-548708"
	I1101 10:52:57.284197  501404 addons.go:70] Setting default-storageclass=true in profile "no-preload-548708"
	I1101 10:52:57.284201  501404 addons.go:239] Setting addon dashboard=true in "no-preload-548708"
	W1101 10:52:57.284208  501404 addons.go:248] addon dashboard should already be in state true
	I1101 10:52:57.284208  501404 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-548708"
	I1101 10:52:57.284273  501404 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:57.284512  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:57.284890  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:57.293382  501404 out.go:179] * Verifying Kubernetes components...
	I1101 10:52:57.284191  501404 addons.go:239] Setting addon storage-provisioner=true in "no-preload-548708"
	W1101 10:52:57.293646  501404 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:52:57.293706  501404 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:57.294268  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:57.296717  501404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:52:57.320816  501404 addons.go:239] Setting addon default-storageclass=true in "no-preload-548708"
	W1101 10:52:57.320837  501404 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:52:57.320860  501404 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:52:57.321397  501404 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:52:57.357944  501404 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:52:57.360887  501404 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:52:57.366173  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:52:57.366200  501404 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:52:57.366274  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:57.374306  501404 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:52:57.601355  500273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.202031562s)
	I1101 10:52:57.601410  500273 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.171412833s)
	I1101 10:52:57.601443  500273 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:52:57.601500  500273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:52:57.601569  500273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.135223081s)
	I1101 10:52:58.018491  500273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.15629352s)
	I1101 10:52:58.018712  500273 api_server.go:72] duration metric: took 9.980900463s to wait for apiserver process to appear ...
	I1101 10:52:58.018766  500273 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:52:58.018801  500273 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:52:58.022628  500273 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-196911 addons enable metrics-server
	
	I1101 10:52:58.025658  500273 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 10:52:58.028631  500273 addons.go:515] duration metric: took 9.990455194s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:52:58.042222  500273 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:52:58.042250  500273 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:52:58.519673  500273 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:52:58.533389  500273 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:52:58.535246  500273 api_server.go:141] control plane version: v1.34.1
	I1101 10:52:58.535316  500273 api_server.go:131] duration metric: took 516.527897ms to wait for apiserver health ...
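The healthz wait above first sees a 500 while the rbac/bootstrap-roles post-start hook finishes, then a 200 roughly half a second later. A Go sketch of the same retry loop, assuming the endpoint from this run and that anonymous access to /healthz is allowed (as the default system:public-info-viewer binding permits); skipping TLS verification is only reasonable against a throwaway test cluster:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Insecure client: the test cluster's serving cert is not in the host trust store.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			// A 500 here typically means a post-start hook has not finished yet.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver health")
}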
	I1101 10:52:58.535341  500273 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:52:58.557526  500273 system_pods.go:59] 8 kube-system pods found
	I1101 10:52:58.557619  500273 system_pods.go:61] "coredns-66bc5c9577-nrbdx" [40aa9ab2-b153-44dd-8fd8-67a26277b297] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:52:58.557655  500273 system_pods.go:61] "etcd-newest-cni-196911" [42f247b8-6ece-44a9-93cd-beb285466fe5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:52:58.557680  500273 system_pods.go:61] "kindnet-mlxls" [0d6d41c4-8fef-48d4-ab11-4f2c76c278e6] Running
	I1101 10:52:58.557710  500273 system_pods.go:61] "kube-apiserver-newest-cni-196911" [140c210e-a29a-4e71-932d-8133da9b074f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:52:58.557744  500273 system_pods.go:61] "kube-controller-manager-newest-cni-196911" [538b5879-e897-4ab3-950e-1317c7dad7e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:52:58.557773  500273 system_pods.go:61] "kube-proxy-2psfb" [fc92af6a-7726-496b-8f2c-e315e3065bf2] Running
	I1101 10:52:58.557797  500273 system_pods.go:61] "kube-scheduler-newest-cni-196911" [e5db2498-da4d-4ca5-b16a-3f78ee27f34c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:52:58.557830  500273 system_pods.go:61] "storage-provisioner" [3987f872-17e6-466b-b60e-1e931276699e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:52:58.557856  500273 system_pods.go:74] duration metric: took 22.495158ms to wait for pod list to return data ...
	I1101 10:52:58.557881  500273 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:52:58.564486  500273 default_sa.go:45] found service account: "default"
	I1101 10:52:58.564554  500273 default_sa.go:55] duration metric: took 6.644ms for default service account to be created ...
	I1101 10:52:58.564583  500273 kubeadm.go:587] duration metric: took 10.526771155s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:52:58.564614  500273 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:52:58.569451  500273 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:52:58.569533  500273 node_conditions.go:123] node cpu capacity is 2
	I1101 10:52:58.569563  500273 node_conditions.go:105] duration metric: took 4.918224ms to run NodePressure ...
	I1101 10:52:58.569592  500273 start.go:242] waiting for startup goroutines ...
	I1101 10:52:58.569624  500273 start.go:247] waiting for cluster config update ...
	I1101 10:52:58.569650  500273 start.go:256] writing updated cluster config ...
	I1101 10:52:58.569959  500273 ssh_runner.go:195] Run: rm -f paused
	I1101 10:52:58.667211  500273 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:52:58.670858  500273 out.go:179] * Done! kubectl is now configured to use "newest-cni-196911" cluster and "default" namespace by default
	I1101 10:52:57.378797  501404 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:57.378819  501404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:52:57.378898  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:57.381220  501404 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:57.381243  501404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:52:57.381304  501404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:52:57.428762  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:57.431213  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:57.445049  501404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:52:57.853002  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:52:57.853029  501404 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:52:57.950303  501404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:52:57.978681  501404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:52:57.986714  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:52:57.986736  501404 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:52:58.060959  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:52:58.061032  501404 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:52:58.066642  501404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:52:58.072191  501404 node_ready.go:35] waiting up to 6m0s for node "no-preload-548708" to be "Ready" ...
	I1101 10:52:58.148098  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:52:58.148118  501404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:52:58.252137  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:52:58.252212  501404 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:52:58.323175  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:52:58.323249  501404 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:52:58.443259  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:52:58.443283  501404 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:52:58.509163  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:52:58.509189  501404 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:52:58.528633  501404 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:52:58.528658  501404 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:52:58.578067  501404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:53:05.255243  501404 node_ready.go:49] node "no-preload-548708" is "Ready"
	I1101 10:53:05.255273  501404 node_ready.go:38] duration metric: took 7.183043933s for node "no-preload-548708" to be "Ready" ...
	I1101 10:53:05.255286  501404 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:53:05.255346  501404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
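At this point the restart path has waited up to 6m0s for the node to report Ready before checking for the apiserver process. A rough client-go sketch of that readiness poll, using the kubeconfig path and node name from this run; the retry interval and deadline are illustrative, not minikube's actual values:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21832-292445/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-548708", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// The node counts as schedulable once the Ready condition is True.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}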
	
	
	==> CRI-O <==
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.376731686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.381213205Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-2psfb/POD" id=ca49d9d8-9149-4a1c-a187-035fa650f138 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.381856845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.404696984Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ca49d9d8-9149-4a1c-a187-035fa650f138 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.4056432Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7f748768-9cc7-4919-b857-dcfd5f72d5a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.427949663Z" level=info msg="Ran pod sandbox 2777922d5fdb000f43008b2ccd28e42f5a612ca78434b5162ea2aca82190fbe1 with infra container: kube-system/kindnet-mlxls/POD" id=7f748768-9cc7-4919-b857-dcfd5f72d5a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.435485971Z" level=info msg="Ran pod sandbox 78cd2bfb4ad676c73a091af3adf3ff15477bf611471b48d879d50fd76c328371 with infra container: kube-system/kube-proxy-2psfb/POD" id=ca49d9d8-9149-4a1c-a187-035fa650f138 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.444201416Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e5c290df-b89b-4d44-b2c7-187eef5b60f0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.444994055Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=82b83ab6-d63c-45a0-a370-cd7550d58f14 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.466992684Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f6e3b379-134c-4c23-a6ec-cffb471a9f48 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.467028557Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=70ec4b2b-7e8b-440c-b3a3-a657f3eac0df name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.468586003Z" level=info msg="Creating container: kube-system/kube-proxy-2psfb/kube-proxy" id=a721d30c-d3a6-4dd0-9030-32190dd6d028 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.468703863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.468729644Z" level=info msg="Creating container: kube-system/kindnet-mlxls/kindnet-cni" id=906debd3-acca-4be9-b29b-e0791f53a673 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.468809292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.529518737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.53699532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.577820133Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.578795585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.682976818Z" level=info msg="Created container a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff: kube-system/kindnet-mlxls/kindnet-cni" id=906debd3-acca-4be9-b29b-e0791f53a673 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.686730839Z" level=info msg="Starting container: a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff" id=88830b63-e260-4244-a57c-c9b28c115df7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.689923738Z" level=info msg="Started container" PID=1058 containerID=a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff description=kube-system/kindnet-mlxls/kindnet-cni id=88830b63-e260-4244-a57c-c9b28c115df7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2777922d5fdb000f43008b2ccd28e42f5a612ca78434b5162ea2aca82190fbe1
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.959213992Z" level=info msg="Created container fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9: kube-system/kube-proxy-2psfb/kube-proxy" id=a721d30c-d3a6-4dd0-9030-32190dd6d028 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.959969035Z" level=info msg="Starting container: fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9" id=9632c096-46a2-47c9-9d8e-f492949d2a65 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:52:55 newest-cni-196911 crio[611]: time="2025-11-01T10:52:55.970253446Z" level=info msg="Started container" PID=1059 containerID=fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9 description=kube-system/kube-proxy-2psfb/kube-proxy id=9632c096-46a2-47c9-9d8e-f492949d2a65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=78cd2bfb4ad676c73a091af3adf3ff15477bf611471b48d879d50fd76c328371
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a88721b15f260       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   11 seconds ago      Running             kindnet-cni               1                   2777922d5fdb0       kindnet-mlxls                               kube-system
	fdfa04af4c179       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   11 seconds ago      Running             kube-proxy                1                   78cd2bfb4ad67       kube-proxy-2psfb                            kube-system
	292d0cbb536ac       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago      Running             etcd                      1                   cc7f336b19733       etcd-newest-cni-196911                      kube-system
	43cff061f63df       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago      Running             kube-controller-manager   1                   75a287967e9b2       kube-controller-manager-newest-cni-196911   kube-system
	4c9e83d09d804       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   19 seconds ago      Running             kube-apiserver            1                   e82d6ae2c3a52       kube-apiserver-newest-cni-196911            kube-system
	6ee79706bb2c3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   19 seconds ago      Running             kube-scheduler            1                   1c01607ecb3b8       kube-scheduler-newest-cni-196911            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-196911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-196911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=newest-cni-196911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_52_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:52:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-196911
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:52:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:52:54 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:52:54 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:52:54 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:52:54 +0000   Sat, 01 Nov 2025 10:52:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-196911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1a13745d-d4b0-4a25-a286-6bb43ff747ac
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-196911                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-mlxls                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-newest-cni-196911             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-196911    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-2psfb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-newest-cni-196911             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 32s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientPID     39s                kubelet          Node newest-cni-196911 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 39s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  39s                kubelet          Node newest-cni-196911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s                kubelet          Node newest-cni-196911 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 39s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           35s                node-controller  Node newest-cni-196911 event: Registered Node newest-cni-196911 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 21s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node newest-cni-196911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node newest-cni-196911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x8 over 21s)  kubelet          Node newest-cni-196911 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7s                 node-controller  Node newest-cni-196911 event: Registered Node newest-cni-196911 in Controller
	
	
	==> dmesg <==
	[ +16.177341] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:51] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:52] overlayfs: idmapped layers are currently not supported
	[ +26.480177] overlayfs: idmapped layers are currently not supported
	[  +9.079378] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [292d0cbb536acc09cd84b96d1b822feb61d97070a176a9932123014a40ee60cb] <==
	{"level":"warn","ts":"2025-11-01T10:52:51.071622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.133080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.172278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.218972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.260713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.302849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.391992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.425362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.465796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.516968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.559828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.583947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.605603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.621401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.667928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.685398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.717930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.740953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.762987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.826120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.905416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.943085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:51.997641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:52.042465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:52.222554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43404","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:53:07 up  2:35,  0 user,  load average: 8.08, 4.76, 3.37
	Linux newest-cni-196911 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a88721b15f260f8a89963171573d649f82fb4cc278159302fc14d86567df47ff] <==
	I1101 10:52:55.931367       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:52:55.932717       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:52:55.939612       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:52:55.939637       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:52:55.939652       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:52:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:52:56.218276       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:52:56.218361       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:52:56.218395       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:52:56.219608       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [4c9e83d09d804cacddc0212f96f7746196a7c47d338ed0e9519993cbb75d1314] <==
	I1101 10:52:54.374170       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:52:54.375860       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:52:54.395929       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:52:54.409211       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:52:54.478722       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:52:54.478786       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:52:54.674328       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:52:54.958531       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:52:55.000071       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:52:55.000440       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:52:56.552729       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:52:56.836911       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:52:57.125530       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:52:57.197318       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:52:57.913624       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.140.221"}
	I1101 10:52:58.007248       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.159.160"}
	E1101 10:53:00.643703       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1101 10:53:00.650459       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-11-01T10:53:00.651527Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001995a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1101 10:53:00.651632       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 1.094763ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1101 10:53:00.651818       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1101 10:53:00.653429       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="9.808386ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/kube-controller-manager-newest-cni-196911/status" result=null
	I1101 10:53:00.812486       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:53:00.839989       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:53:00.859396       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [43cff061f63df9268ac8b9a55804a126d15f4a912d0b682729bc41fab87e54d4] <==
	I1101 10:53:00.509653       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:53:00.538598       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:53:00.557013       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:53:00.557130       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:53:00.571246       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:53:00.559864       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:53:00.559880       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:53:00.578269       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:53:00.559921       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:53:00.571394       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:53:00.557548       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:53:00.589805       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:53:00.591067       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:53:00.578459       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:53:00.578473       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:53:00.606955       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:53:00.571364       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:53:00.607280       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:53:00.571537       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:53:00.571549       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:53:00.608314       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:53:00.663519       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:53:00.663591       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:53:00.663623       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:53:00.765109       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [fdfa04af4c179f64d30bb2228675311ae707dab7e617e03be48fff445c30bbf9] <==
	I1101 10:52:58.361282       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:52:58.518093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:52:58.953850       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:52:58.953970       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:52:58.981082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:53:00.662480       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:53:00.662613       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:53:01.142127       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:53:01.142547       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:53:01.142761       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:53:01.144217       1 config.go:200] "Starting service config controller"
	I1101 10:53:01.145018       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:53:01.145075       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:53:01.145105       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:53:01.145142       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:53:01.145170       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:53:01.145857       1 config.go:309] "Starting node config controller"
	I1101 10:53:01.151702       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:53:01.151746       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:53:01.245110       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:53:01.250825       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:53:01.250845       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6ee79706bb2c3b2a369e20eed26ccdb5985aa7c70ae1cd34024086e323278927] <==
	I1101 10:52:55.187486       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:53:02.298331       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:53:02.298374       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:53:02.305959       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:53:02.306234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:02.306408       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:02.306208       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:53:02.306487       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:53:02.306246       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:53:02.330781       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:53:02.306260       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:53:02.418260       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:53:02.419383       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:02.434920       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:52:51 newest-cni-196911 kubelet[730]: E1101 10:52:51.145725     730 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-196911\" not found" node="newest-cni-196911"
	Nov 01 10:52:53 newest-cni-196911 kubelet[730]: E1101 10:52:53.598188     730 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-196911\" not found" node="newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.038395     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.693038     730 apiserver.go:52] "Watching apiserver"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.833497     730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.862021     730 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.862121     730 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.862150     730 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873436     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-xtables-lock\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873493     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc92af6a-7726-496b-8f2c-e315e3065bf2-lib-modules\") pod \"kube-proxy-2psfb\" (UID: \"fc92af6a-7726-496b-8f2c-e315e3065bf2\") " pod="kube-system/kube-proxy-2psfb"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873534     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc92af6a-7726-496b-8f2c-e315e3065bf2-xtables-lock\") pod \"kube-proxy-2psfb\" (UID: \"fc92af6a-7726-496b-8f2c-e315e3065bf2\") " pod="kube-system/kube-proxy-2psfb"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873554     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-cni-cfg\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873574     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6d41c4-8fef-48d4-ab11-4f2c76c278e6-lib-modules\") pod \"kindnet-mlxls\" (UID: \"0d6d41c4-8fef-48d4-ab11-4f2c76c278e6\") " pod="kube-system/kindnet-mlxls"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.873858     730 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: E1101 10:52:54.991686     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-196911\" already exists" pod="kube-system/kube-scheduler-newest-cni-196911"
	Nov 01 10:52:54 newest-cni-196911 kubelet[730]: I1101 10:52:54.991727     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: I1101 10:52:55.197470     730 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: E1101 10:52:55.241242     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-196911\" already exists" pod="kube-system/etcd-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: I1101 10:52:55.241279     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: E1101 10:52:55.310446     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-196911\" already exists" pod="kube-system/kube-apiserver-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: I1101 10:52:55.310483     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-196911"
	Nov 01 10:52:55 newest-cni-196911 kubelet[730]: E1101 10:52:55.442820     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-196911\" already exists" pod="kube-system/kube-controller-manager-newest-cni-196911"
	Nov 01 10:53:00 newest-cni-196911 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:53:00 newest-cni-196911 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:53:00 newest-cni-196911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-196911 -n newest-cni-196911
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-196911 -n newest-cni-196911: exit status 2 (490.356452ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-196911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-nrbdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vlwr4 kubernetes-dashboard-855c9754f9-mfggn
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vlwr4 kubernetes-dashboard-855c9754f9-mfggn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vlwr4 kubernetes-dashboard-855c9754f9-mfggn: exit status 1 (85.736221ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-nrbdx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-vlwr4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mfggn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-196911 describe pod coredns-66bc5c9577-nrbdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vlwr4 kubernetes-dashboard-855c9754f9-mfggn: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-548708 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-548708 --alsologtostderr -v=1: exit status 80 (1.967891358s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-548708 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:54:01.825874  508104 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:54:01.826012  508104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:54:01.826025  508104 out.go:374] Setting ErrFile to fd 2...
	I1101 10:54:01.826032  508104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:54:01.826327  508104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:54:01.826636  508104 out.go:368] Setting JSON to false
	I1101 10:54:01.826948  508104 mustload.go:66] Loading cluster: no-preload-548708
	I1101 10:54:01.827410  508104 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:54:01.828124  508104 cli_runner.go:164] Run: docker container inspect no-preload-548708 --format={{.State.Status}}
	I1101 10:54:01.857553  508104 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:54:01.857875  508104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:54:01.926198  508104 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:54:01.913006299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:54:01.926970  508104 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-548708 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:54:01.930541  508104 out.go:179] * Pausing node no-preload-548708 ... 
	I1101 10:54:01.934081  508104 host.go:66] Checking if "no-preload-548708" exists ...
	I1101 10:54:01.936134  508104 ssh_runner.go:195] Run: systemctl --version
	I1101 10:54:01.936218  508104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-548708
	I1101 10:54:01.954296  508104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/no-preload-548708/id_rsa Username:docker}
	I1101 10:54:02.067767  508104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:54:02.081755  508104 pause.go:52] kubelet running: true
	I1101 10:54:02.081839  508104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:54:02.364364  508104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:54:02.364452  508104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:54:02.437601  508104 cri.go:89] found id: "ebe2d6e71d49987999115a6dbf899bb298ed040585a1bb35ed5195ebc4afd3c3"
	I1101 10:54:02.437624  508104 cri.go:89] found id: "12f9f2ae7561486cf3a5cf5e25b0238244bb53590abd5eceab13baaaf91bbfc5"
	I1101 10:54:02.437634  508104 cri.go:89] found id: "8880cc0aa44ad7c73eacefbffb811b0a869e18784d7193a9c59efd28558a6c37"
	I1101 10:54:02.437638  508104 cri.go:89] found id: "31026d42f589e36ffbd94fb6e3033d7d6cf0ed9de81d4521fc55197785d8b107"
	I1101 10:54:02.437642  508104 cri.go:89] found id: "8d15cf2b7e1327dad8ab5a10c985a4b55630ff084d152dd39f5ad16057f2347f"
	I1101 10:54:02.437645  508104 cri.go:89] found id: "21b6a3d81852a5fbef2e31f92ee373c1322e58d33d0a4c6198b4f9654e688b41"
	I1101 10:54:02.437649  508104 cri.go:89] found id: "4d7c8dba98a1808a309fd3d7927f59223183ac53462318916d991ce724a3d765"
	I1101 10:54:02.437651  508104 cri.go:89] found id: "f5f4bd6b7426cda5e69e50ee4f6e6167b783e0bd20ec2f2ea8043896373ef992"
	I1101 10:54:02.437655  508104 cri.go:89] found id: "1d6ce9e953a8b3c836603bef290e36c2eae37f5508055cd9ebe57279220b4715"
	I1101 10:54:02.437661  508104 cri.go:89] found id: "6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089"
	I1101 10:54:02.437664  508104 cri.go:89] found id: "def31cf7c49fbf2f7792ca869ef727ba0840aa7fc1d1f37c7800d617e02e98cc"
	I1101 10:54:02.437667  508104 cri.go:89] found id: ""
	I1101 10:54:02.437718  508104 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:54:02.457545  508104 retry.go:31] will retry after 198.697974ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:54:02Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:54:02.657123  508104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:54:02.671040  508104 pause.go:52] kubelet running: false
	I1101 10:54:02.671116  508104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:54:02.850132  508104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:54:02.850248  508104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:54:02.934009  508104 cri.go:89] found id: "ebe2d6e71d49987999115a6dbf899bb298ed040585a1bb35ed5195ebc4afd3c3"
	I1101 10:54:02.934045  508104 cri.go:89] found id: "12f9f2ae7561486cf3a5cf5e25b0238244bb53590abd5eceab13baaaf91bbfc5"
	I1101 10:54:02.934051  508104 cri.go:89] found id: "8880cc0aa44ad7c73eacefbffb811b0a869e18784d7193a9c59efd28558a6c37"
	I1101 10:54:02.934055  508104 cri.go:89] found id: "31026d42f589e36ffbd94fb6e3033d7d6cf0ed9de81d4521fc55197785d8b107"
	I1101 10:54:02.934059  508104 cri.go:89] found id: "8d15cf2b7e1327dad8ab5a10c985a4b55630ff084d152dd39f5ad16057f2347f"
	I1101 10:54:02.934062  508104 cri.go:89] found id: "21b6a3d81852a5fbef2e31f92ee373c1322e58d33d0a4c6198b4f9654e688b41"
	I1101 10:54:02.934067  508104 cri.go:89] found id: "4d7c8dba98a1808a309fd3d7927f59223183ac53462318916d991ce724a3d765"
	I1101 10:54:02.934070  508104 cri.go:89] found id: "f5f4bd6b7426cda5e69e50ee4f6e6167b783e0bd20ec2f2ea8043896373ef992"
	I1101 10:54:02.934073  508104 cri.go:89] found id: "1d6ce9e953a8b3c836603bef290e36c2eae37f5508055cd9ebe57279220b4715"
	I1101 10:54:02.934080  508104 cri.go:89] found id: "6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089"
	I1101 10:54:02.934083  508104 cri.go:89] found id: "def31cf7c49fbf2f7792ca869ef727ba0840aa7fc1d1f37c7800d617e02e98cc"
	I1101 10:54:02.934086  508104 cri.go:89] found id: ""
	I1101 10:54:02.934145  508104 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:54:02.945921  508104 retry.go:31] will retry after 463.125345ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:54:02Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:54:03.409501  508104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:54:03.423023  508104 pause.go:52] kubelet running: false
	I1101 10:54:03.423106  508104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:54:03.613179  508104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:54:03.613264  508104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:54:03.692129  508104 cri.go:89] found id: "ebe2d6e71d49987999115a6dbf899bb298ed040585a1bb35ed5195ebc4afd3c3"
	I1101 10:54:03.692150  508104 cri.go:89] found id: "12f9f2ae7561486cf3a5cf5e25b0238244bb53590abd5eceab13baaaf91bbfc5"
	I1101 10:54:03.692156  508104 cri.go:89] found id: "8880cc0aa44ad7c73eacefbffb811b0a869e18784d7193a9c59efd28558a6c37"
	I1101 10:54:03.692160  508104 cri.go:89] found id: "31026d42f589e36ffbd94fb6e3033d7d6cf0ed9de81d4521fc55197785d8b107"
	I1101 10:54:03.692164  508104 cri.go:89] found id: "8d15cf2b7e1327dad8ab5a10c985a4b55630ff084d152dd39f5ad16057f2347f"
	I1101 10:54:03.692168  508104 cri.go:89] found id: "21b6a3d81852a5fbef2e31f92ee373c1322e58d33d0a4c6198b4f9654e688b41"
	I1101 10:54:03.692205  508104 cri.go:89] found id: "4d7c8dba98a1808a309fd3d7927f59223183ac53462318916d991ce724a3d765"
	I1101 10:54:03.692209  508104 cri.go:89] found id: "f5f4bd6b7426cda5e69e50ee4f6e6167b783e0bd20ec2f2ea8043896373ef992"
	I1101 10:54:03.692213  508104 cri.go:89] found id: "1d6ce9e953a8b3c836603bef290e36c2eae37f5508055cd9ebe57279220b4715"
	I1101 10:54:03.692234  508104 cri.go:89] found id: "6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089"
	I1101 10:54:03.692244  508104 cri.go:89] found id: "def31cf7c49fbf2f7792ca869ef727ba0840aa7fc1d1f37c7800d617e02e98cc"
	I1101 10:54:03.692248  508104 cri.go:89] found id: ""
	I1101 10:54:03.692314  508104 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:54:03.707576  508104 out.go:203] 
	W1101 10:54:03.710570  508104 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:54:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:54:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:54:03.710597  508104 out.go:285] * 
	* 
	W1101 10:54:03.716412  508104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:54:03.719568  508104 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-548708 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-548708
helpers_test.go:243: (dbg) docker inspect no-preload-548708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e",
	        "Created": "2025-11-01T10:51:12.134501468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:52:46.461409069Z",
	            "FinishedAt": "2025-11-01T10:52:45.404965141Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/hostname",
	        "HostsPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/hosts",
	        "LogPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e-json.log",
	        "Name": "/no-preload-548708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-548708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-548708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e",
	                "LowerDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-548708",
	                "Source": "/var/lib/docker/volumes/no-preload-548708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-548708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-548708",
	                "name.minikube.sigs.k8s.io": "no-preload-548708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3dba102ad32c44dc2bf32f97bb168c2ce96dd02241da396057c14cabb2c31d0",
	            "SandboxKey": "/var/run/docker/netns/c3dba102ad32",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-548708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:36:9a:0d:0f:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "458d9289c1e4678d575d4635bc902fe82bbd4c6f42dd0c954078044d50841590",
	                    "EndpointID": "f6695f291e1e7f52b0b944b72e169fe955e3ad875bda491578ea2f75996a375a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-548708",
	                        "965e3c07903f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
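The NetworkSettings.Ports block in the inspect dump above is what ties the container's 22/tcp, 2376/tcp, 5000/tcp, 8443/tcp and 32443/tcp endpoints to the 127.0.0.1:3346x host ports the tests dial. A minimal sketch of reading that mapping back out of docker inspect; the container name is taken from this report and assumed to still exist locally:

	// Sketch only: decode NetworkSettings.Ports from `docker inspect` output,
	// as shown in the dump above, to recover the published host ports.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// Container name copied from this report; adjust for other profiles.
		out, err := exec.Command("docker", "inspect", "no-preload-548708").Output()
		if err != nil {
			log.Fatalf("docker inspect: %v", err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatalf("decode: %v", err)
		}
		for _, e := range entries {
			for port, bindings := range e.NetworkSettings.Ports {
				for _, b := range bindings {
					fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
				}
			}
		}
	}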
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548708 -n no-preload-548708
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548708 -n no-preload-548708: exit status 2 (394.428219ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
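minikube status --format={{.Host}} exits non-zero here even though the host reports Running, and the harness explicitly treats that as tolerable ("may be ok"). A hedged sketch of that probe, with the binary path and profile name copied from this report:

	// Sketch of the post-mortem status check above: record the host state and
	// treat a non-zero exit as non-fatal as long as a state string came back.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"status", "--format={{.Host}}", "-p", "no-preload-548708", "-n", "no-preload-548708")
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil {
			// Non-zero exit: log it, but keep going since we still got a state.
			fmt.Printf("status error: %v (may be ok), host state: %q\n", err, state)
			return
		}
		fmt.Printf("host state: %q\n", state)
	}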
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-548708 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-548708 logs -n 25: (1.388495377s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p disable-driver-mounts-514829                                                                                                                                                                                                               │ disable-driver-mounts-514829 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ image   │ embed-certs-499088 image list --format=json                                                                                                                                                                                                   │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ pause   │ -p embed-certs-499088 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p no-preload-548708 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p newest-cni-196911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p newest-cni-196911 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable dashboard -p newest-cni-196911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable dashboard -p no-preload-548708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:53 UTC │
	│ image   │ newest-cni-196911 image list --format=json                                                                                                                                                                                                    │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ pause   │ -p newest-cni-196911 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ delete  │ -p newest-cni-196911                                                                                                                                                                                                                          │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ delete  │ -p newest-cni-196911                                                                                                                                                                                                                          │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ start   │ -p auto-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-883951                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │                     │
	│ image   │ no-preload-548708 image list --format=json                                                                                                                                                                                                    │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:54 UTC │ 01 Nov 25 10:54 UTC │
	│ pause   │ -p no-preload-548708 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:53:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:53:11.381389  505282 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:53:11.381699  505282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:11.381782  505282 out.go:374] Setting ErrFile to fd 2...
	I1101 10:53:11.381893  505282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:11.382519  505282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:53:11.383154  505282 out.go:368] Setting JSON to false
	I1101 10:53:11.384212  505282 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9343,"bootTime":1761985048,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:53:11.384309  505282 start.go:143] virtualization:  
	I1101 10:53:11.388174  505282 out.go:179] * [auto-883951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:53:11.391336  505282 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:53:11.391402  505282 notify.go:221] Checking for updates...
	I1101 10:53:11.399114  505282 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:53:11.402172  505282 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:53:11.405666  505282 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:53:11.408698  505282 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:53:11.411697  505282 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:53:11.415525  505282 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:53:11.415700  505282 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:53:11.461015  505282 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:53:11.461159  505282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:53:11.571448  505282 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:11.559552999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:53:11.571550  505282 docker.go:319] overlay module found
	I1101 10:53:11.575132  505282 out.go:179] * Using the docker driver based on user configuration
	I1101 10:53:11.578081  505282 start.go:309] selected driver: docker
	I1101 10:53:11.578106  505282 start.go:930] validating driver "docker" against <nil>
	I1101 10:53:11.578121  505282 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:53:11.578837  505282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:53:11.678205  505282 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:11.663918825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:53:11.678553  505282 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:53:11.678890  505282 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:53:11.681990  505282 out.go:179] * Using Docker driver with root privileges
	I1101 10:53:11.685017  505282 cni.go:84] Creating CNI manager for ""
	I1101 10:53:11.685085  505282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:53:11.685093  505282 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
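The three lines above record the CNI decision for this profile: the docker driver plus the crio runtime leads minikube to recommend kindnet and set NetworkPlugin=cni. A trivial restatement of that logged rule for reference; this is not minikube's actual code:

	// Restates only what the log says for this driver/runtime pair.
	package main

	import "fmt"

	func recommendCNI(driver, runtime string) string {
		if driver == "docker" && runtime == "crio" {
			return "kindnet"
		}
		return "" // other combinations are outside what this log shows
	}

	func main() {
		fmt.Println(recommendCNI("docker", "crio")) // kindnet
	}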
	I1101 10:53:11.685182  505282 start.go:353] cluster config:
	{Name:auto-883951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1101 10:53:11.688404  505282 out.go:179] * Starting "auto-883951" primary control-plane node in "auto-883951" cluster
	I1101 10:53:11.691226  505282 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:53:11.694197  505282 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:53:11.697060  505282 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:53:11.697132  505282 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:53:11.697161  505282 cache.go:59] Caching tarball of preloaded images
	I1101 10:53:11.697260  505282 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:53:11.697269  505282 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:53:11.697373  505282 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/config.json ...
	I1101 10:53:11.697390  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/config.json: {Name:mk3308d6ffdcf444ab84398b5bcc995b81908c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:11.697530  505282 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:53:11.718557  505282 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:53:11.718576  505282 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:53:11.718590  505282 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:53:11.718612  505282 start.go:360] acquireMachinesLock for auto-883951: {Name:mkaa0972c90dc13698f55ed05c022d37ae86426e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:53:11.718705  505282 start.go:364] duration metric: took 77.096µs to acquireMachinesLock for "auto-883951"
	I1101 10:53:11.718755  505282 start.go:93] Provisioning new machine with config: &{Name:auto-883951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:53:11.718830  505282 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:53:12.395506  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:14.396502  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:11.722433  505282 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:53:11.722680  505282 start.go:159] libmachine.API.Create for "auto-883951" (driver="docker")
	I1101 10:53:11.722717  505282 client.go:173] LocalClient.Create starting
	I1101 10:53:11.722802  505282 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 10:53:11.722836  505282 main.go:143] libmachine: Decoding PEM data...
	I1101 10:53:11.722853  505282 main.go:143] libmachine: Parsing certificate...
	I1101 10:53:11.722914  505282 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 10:53:11.722945  505282 main.go:143] libmachine: Decoding PEM data...
	I1101 10:53:11.722954  505282 main.go:143] libmachine: Parsing certificate...
	I1101 10:53:11.723306  505282 cli_runner.go:164] Run: docker network inspect auto-883951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:53:11.752461  505282 cli_runner.go:211] docker network inspect auto-883951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:53:11.752549  505282 network_create.go:284] running [docker network inspect auto-883951] to gather additional debugging logs...
	I1101 10:53:11.752567  505282 cli_runner.go:164] Run: docker network inspect auto-883951
	W1101 10:53:11.769644  505282 cli_runner.go:211] docker network inspect auto-883951 returned with exit code 1
	I1101 10:53:11.769688  505282 network_create.go:287] error running [docker network inspect auto-883951]: docker network inspect auto-883951: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-883951 not found
	I1101 10:53:11.769711  505282 network_create.go:289] output of [docker network inspect auto-883951]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-883951 not found
	
	** /stderr **
	I1101 10:53:11.769969  505282 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:53:11.786799  505282 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
	I1101 10:53:11.787184  505282 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-adecbbb769f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:b0:b5:2e:4c:30} reservation:<nil>}
	I1101 10:53:11.787419  505282 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2077d26d1806 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:49:68:b6:9e:fb} reservation:<nil>}
	I1101 10:53:11.787870  505282 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7680}
	I1101 10:53:11.787896  505282 network_create.go:124] attempt to create docker network auto-883951 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:53:11.787952  505282 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-883951 auto-883951
	I1101 10:53:11.851033  505282 network_create.go:108] docker network auto-883951 192.168.76.0/24 created
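The subnet walk above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges hold them, then creates the network on 192.168.76.0/24. A small sketch of that selection, under the assumption (taken only from this log) that candidates advance in steps of 9:

	// Sketch: pick the first free 192.168.x.0/24 from the sequence seen in the log.
	package main

	import "fmt"

	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-5e2665991a3d
			"192.168.58.0/24": true, // br-adecbbb769f0
			"192.168.67.0/24": true, // br-2077d26d1806
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
	}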
	I1101 10:53:11.851067  505282 kic.go:121] calculated static IP "192.168.76.2" for the "auto-883951" container
	I1101 10:53:11.851165  505282 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:53:11.867744  505282 cli_runner.go:164] Run: docker volume create auto-883951 --label name.minikube.sigs.k8s.io=auto-883951 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:53:11.891329  505282 oci.go:103] Successfully created a docker volume auto-883951
	I1101 10:53:11.891422  505282 cli_runner.go:164] Run: docker run --rm --name auto-883951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-883951 --entrypoint /usr/bin/test -v auto-883951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:53:12.748020  505282 oci.go:107] Successfully prepared a docker volume auto-883951
	I1101 10:53:12.748067  505282 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:53:12.748086  505282 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:53:12.748170  505282 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-883951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 10:53:16.396605  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:18.409697  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:20.894807  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:17.829021  505282 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-883951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.080811105s)
	I1101 10:53:17.829056  505282 kic.go:203] duration metric: took 5.08096661s to extract preloaded images to volume ...
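The five-second step above unpacks the preload tarball straight into the profile's /var volume by running the kicbase image with tar as its entrypoint. A hedged equivalent of that invocation, with the paths and image reference copied from this report:

	// Sketch: replay the preload-extraction `docker run` shown in the log.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const kicbase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8"
		const preload = "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", "auto-883951:/extractDir",
			kicbase, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("preload extraction failed: %v", err)
		}
	}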
	W1101 10:53:17.829207  505282 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:53:17.829330  505282 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:53:17.946497  505282 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-883951 --name auto-883951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-883951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-883951 --network auto-883951 --ip 192.168.76.2 --volume auto-883951:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:53:18.400000  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Running}}
	I1101 10:53:18.431137  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:18.457653  505282 cli_runner.go:164] Run: docker exec auto-883951 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:53:18.530190  505282 oci.go:144] the created container "auto-883951" has a running status.
	I1101 10:53:18.530216  505282 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa...
	I1101 10:53:19.690783  505282 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:53:19.719084  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:19.747896  505282 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:53:19.747914  505282 kic_runner.go:114] Args: [docker exec --privileged auto-883951 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:53:19.836854  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:19.865231  505282 machine.go:94] provisionDockerMachine start ...
	I1101 10:53:19.865330  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:19.888152  505282 main.go:143] libmachine: Using SSH client type: native
	I1101 10:53:19.888493  505282 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1101 10:53:19.888509  505282 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:53:19.889225  505282 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
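The handshake failure above is expected on the first dial: sshd inside the freshly started container is not up yet, and the log shows the same command succeeding a few seconds later. A simplified sketch that only waits for the TCP port (33468 is the host port this log maps to the container's 22/tcp):

	// Sketch: retry until the forwarded SSH port accepts a TCP connection.
	// This checks reachability only, not an actual SSH handshake.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "127.0.0.1:33468" // host port from this log
		for attempt := 1; ; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Printf("ssh port ready after %d attempt(s)\n", attempt)
				return
			}
			if attempt >= 30 {
				fmt.Printf("giving up: %v\n", err)
				return
			}
			time.Sleep(2 * time.Second)
		}
	}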
	W1101 10:53:22.895825  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:24.899773  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:23.063130  505282 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-883951
	
	I1101 10:53:23.063152  505282 ubuntu.go:182] provisioning hostname "auto-883951"
	I1101 10:53:23.063310  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:23.091608  505282 main.go:143] libmachine: Using SSH client type: native
	I1101 10:53:23.091957  505282 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1101 10:53:23.091972  505282 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-883951 && echo "auto-883951" | sudo tee /etc/hostname
	I1101 10:53:23.271045  505282 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-883951
	
	I1101 10:53:23.271205  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:23.323807  505282 main.go:143] libmachine: Using SSH client type: native
	I1101 10:53:23.324424  505282 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1101 10:53:23.324450  505282 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-883951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-883951/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-883951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:53:23.526870  505282 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:53:23.526957  505282 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:53:23.527025  505282 ubuntu.go:190] setting up certificates
	I1101 10:53:23.527063  505282 provision.go:84] configureAuth start
	I1101 10:53:23.527170  505282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-883951
	I1101 10:53:23.551990  505282 provision.go:143] copyHostCerts
	I1101 10:53:23.552072  505282 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:53:23.552082  505282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:53:23.552178  505282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:53:23.552280  505282 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:53:23.552286  505282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:53:23.552312  505282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:53:23.552372  505282 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:53:23.552385  505282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:53:23.552410  505282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:53:23.552623  505282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.auto-883951 san=[127.0.0.1 192.168.76.2 auto-883951 localhost minikube]
	I1101 10:53:24.704827  505282 provision.go:177] copyRemoteCerts
	I1101 10:53:24.704898  505282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:53:24.704966  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:24.722518  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:24.829026  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:53:24.849263  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:53:24.868153  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:53:24.885928  505282 provision.go:87] duration metric: took 1.35882751s to configureAuth
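configureAuth above generates a server certificate whose SANs are 127.0.0.1, 192.168.76.2, auto-883951, localhost and minikube, then copies it to /etc/docker on the node. A generic crypto/x509 sketch of minting a certificate with those SANs; it is self-signed for brevity, whereas the log shows minikube signing with its own CA:

	// Sketch only: not minikube's provisioning code.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-883951"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"auto-883951", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		// Self-signed here; the log's server.pem is signed by the minikube CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}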
	I1101 10:53:24.885961  505282 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:53:24.886142  505282 config.go:182] Loaded profile config "auto-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:53:24.886238  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:24.906334  505282 main.go:143] libmachine: Using SSH client type: native
	I1101 10:53:24.906645  505282 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1101 10:53:24.906666  505282 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:53:25.181106  505282 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:53:25.181136  505282 machine.go:97] duration metric: took 5.31588771s to provisionDockerMachine
	I1101 10:53:25.181146  505282 client.go:176] duration metric: took 13.45842325s to LocalClient.Create
	I1101 10:53:25.181163  505282 start.go:167] duration metric: took 13.458484764s to libmachine.API.Create "auto-883951"
	I1101 10:53:25.181171  505282 start.go:293] postStartSetup for "auto-883951" (driver="docker")
	I1101 10:53:25.181183  505282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:53:25.181244  505282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:53:25.181296  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:25.198905  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:25.309108  505282 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:53:25.312457  505282 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:53:25.312488  505282 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:53:25.312500  505282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:53:25.312560  505282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:53:25.312650  505282 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:53:25.312773  505282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:53:25.320290  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:53:25.338492  505282 start.go:296] duration metric: took 157.304556ms for postStartSetup
	I1101 10:53:25.338912  505282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-883951
	I1101 10:53:25.356731  505282 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/config.json ...
	I1101 10:53:25.357176  505282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:53:25.357232  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:25.374325  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:25.477983  505282 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:53:25.482699  505282 start.go:128] duration metric: took 13.763850155s to createHost
	I1101 10:53:25.482728  505282 start.go:83] releasing machines lock for "auto-883951", held for 13.764015343s
	I1101 10:53:25.482841  505282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-883951
	I1101 10:53:25.499495  505282 ssh_runner.go:195] Run: cat /version.json
	I1101 10:53:25.499508  505282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:53:25.499547  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:25.499575  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:25.516070  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:25.533282  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:25.722617  505282 ssh_runner.go:195] Run: systemctl --version
	I1101 10:53:25.729032  505282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:53:25.779784  505282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:53:25.785062  505282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:53:25.785138  505282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:53:25.813669  505282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:53:25.813736  505282 start.go:496] detecting cgroup driver to use...
	I1101 10:53:25.813804  505282 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:53:25.813872  505282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:53:25.832226  505282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:53:25.845315  505282 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:53:25.845433  505282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:53:25.863213  505282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:53:25.883470  505282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:53:26.014073  505282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:53:26.145935  505282 docker.go:234] disabling docker service ...
	I1101 10:53:26.146002  505282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:53:26.169568  505282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:53:26.183552  505282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:53:26.307742  505282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:53:26.436855  505282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:53:26.451481  505282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:53:26.473638  505282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:53:26.473756  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.483849  505282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:53:26.483969  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.493918  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.502735  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.515085  505282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:53:26.524007  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.533215  505282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.547439  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.556516  505282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:53:26.564701  505282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:53:26.572598  505282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:53:26.692883  505282 ssh_runner.go:195] Run: sudo systemctl restart crio
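The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10.1 and to switch cgroup_manager to cgroupfs before crio is restarted. A hedged illustration of those two substitutions; the sample input below is an assumption, only the replacements mirror the log:

	// Sketch: apply the same two line rewrites the log performs with sed.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "systemd"
	`
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}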
	I1101 10:53:27.154692  505282 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:53:27.154763  505282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:53:27.159135  505282 start.go:564] Will wait 60s for crictl version
	I1101 10:53:27.159200  505282 ssh_runner.go:195] Run: which crictl
	I1101 10:53:27.163486  505282 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:53:27.189797  505282 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
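Before querying crictl, the log waits up to 60s for /var/run/crio/crio.sock to exist. A small sketch of that wait, polling with stat the way the log does:

	// Sketch: poll for the CRI-O socket path until it appears or the deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is present")
	}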
	I1101 10:53:27.189881  505282 ssh_runner.go:195] Run: crio --version
	I1101 10:53:27.219285  505282 ssh_runner.go:195] Run: crio --version
	I1101 10:53:27.252795  505282 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:53:27.255889  505282 cli_runner.go:164] Run: docker network inspect auto-883951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:53:27.271754  505282 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:53:27.275357  505282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
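The grep/rewrite pair above is an idempotent /etc/hosts update: any existing host.minikube.internal entry is filtered out and the current gateway mapping appended, so repeated starts never duplicate the line. The same pattern, spelled out (IP and name copied from the log):

    IP=192.168.76.1; NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts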
	I1101 10:53:27.284778  505282 kubeadm.go:884] updating cluster {Name:auto-883951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:53:27.284900  505282 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:53:27.285002  505282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:53:27.317318  505282 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:53:27.317344  505282 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:53:27.317399  505282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:53:27.342935  505282 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:53:27.342961  505282 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:53:27.342969  505282 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:53:27.343095  505282 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-883951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:53:27.343184  505282 ssh_runner.go:195] Run: crio config
	I1101 10:53:27.405482  505282 cni.go:84] Creating CNI manager for ""
	I1101 10:53:27.405502  505282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:53:27.405537  505282 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:53:27.405571  505282 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-883951 NodeName:auto-883951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:53:27.405703  505282 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-883951"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:53:27.405776  505282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:53:27.413878  505282 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:53:27.413946  505282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:53:27.423838  505282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1101 10:53:27.438623  505282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:53:27.455352  505282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
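The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (2208 bytes) and, once promoted to kubeadm.yaml, drives a single kubeadm init; the exact invocation, including the full --ignore-preflight-errors list, appears further down in this log. A trimmed sketch of that step, assuming the file is already in place (the flag list here is abbreviated for readability):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem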
	I1101 10:53:27.470023  505282 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:53:27.474199  505282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:53:27.484071  505282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:53:27.598023  505282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:53:27.617471  505282 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951 for IP: 192.168.76.2
	I1101 10:53:27.617498  505282 certs.go:195] generating shared ca certs ...
	I1101 10:53:27.617514  505282 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:27.617733  505282 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:53:27.617797  505282 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:53:27.617811  505282 certs.go:257] generating profile certs ...
	I1101 10:53:27.617883  505282 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.key
	I1101 10:53:27.617905  505282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt with IP's: []
	I1101 10:53:27.934966  505282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt ...
	I1101 10:53:27.934998  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt: {Name:mk13aa5637adee1bd3e03dd5586cbdc587a4c079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:27.935219  505282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.key ...
	I1101 10:53:27.935234  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.key: {Name:mk18e4bfc275e6f061acd7a655bde8aa84398d1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:27.935333  505282 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key.56036f0b
	I1101 10:53:27.935353  505282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt.56036f0b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:53:28.722042  505282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt.56036f0b ...
	I1101 10:53:28.722077  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt.56036f0b: {Name:mk108523cd1464e39ecc54dd12f9048e449b70c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:28.722263  505282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key.56036f0b ...
	I1101 10:53:28.722280  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key.56036f0b: {Name:mkc4500c3de9b25b1d6ccae4d40bfe72eb961be9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:28.722370  505282 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt.56036f0b -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt
	I1101 10:53:28.722459  505282 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key.56036f0b -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key
	I1101 10:53:28.722521  505282 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.key
	I1101 10:53:28.722541  505282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.crt with IP's: []
	I1101 10:53:29.398818  505282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.crt ...
	I1101 10:53:29.398849  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.crt: {Name:mkaa5658e3814c8033310ab2247b745a7c1e815b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:29.399027  505282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.key ...
	I1101 10:53:29.399040  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.key: {Name:mk268237e69825433f47c260896b2e64739f75a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:29.399232  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:53:29.399289  505282 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:53:29.399303  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:53:29.399327  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:53:29.399352  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:53:29.399380  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:53:29.399427  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:53:29.400065  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:53:29.418754  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:53:29.439190  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:53:29.457608  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:53:29.478788  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 10:53:29.496644  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:53:29.514823  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:53:29.532851  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:53:29.552153  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:53:29.570919  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:53:29.588755  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:53:29.607019  505282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
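With the CA, apiserver, and proxy-client material now copied under /var/lib/minikube/certs, a quick sanity check is possible with stock openssl; this is not part of the test run, just a hedged way to inspect what the scp steps above put on the node:

    # subject and expiry of the freshly generated apiserver cert
    sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt
    # confirm it chains to the copied cluster CA
    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt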
	I1101 10:53:29.619816  505282 ssh_runner.go:195] Run: openssl version
	I1101 10:53:29.626135  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:53:29.634371  505282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:53:29.637874  505282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:53:29.637938  505282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:53:29.679390  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:53:29.688027  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:53:29.696629  505282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:53:29.700480  505282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:53:29.700546  505282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:53:29.741679  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:53:29.750276  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:53:29.758556  505282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:53:29.762500  505282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:53:29.762595  505282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:53:29.803515  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
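The openssl x509 -hash / ln -fs pairs above install the host CA certificates the OpenSSL way: the subject hash of each PEM becomes the name of a <hash>.0 symlink in /etc/ssl/certs, which is what c_rehash or update-ca-certificates would also produce. The pattern for a single file (variable names are illustrative; the path is one from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"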
	I1101 10:53:29.812412  505282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:53:29.815972  505282 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:53:29.816031  505282 kubeadm.go:401] StartCluster: {Name:auto-883951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:53:29.816110  505282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:53:29.816169  505282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:53:29.845324  505282 cri.go:89] found id: ""
	I1101 10:53:29.845465  505282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:53:29.854085  505282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:53:29.862092  505282 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:53:29.862188  505282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:53:29.869930  505282 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:53:29.869948  505282 kubeadm.go:158] found existing configuration files:
	
	I1101 10:53:29.870000  505282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:53:29.877649  505282 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:53:29.877736  505282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:53:29.885116  505282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:53:29.907124  505282 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:53:29.907211  505282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:53:29.916147  505282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:53:29.924882  505282 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:53:29.925048  505282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:53:29.933154  505282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:53:29.941813  505282 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:53:29.941949  505282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
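The four grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; otherwise it is deleted so kubeadm can regenerate it. The same logic as one loop (endpoint copied from the log):

    EP=https://control-plane.minikube.internal:8443
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$EP" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done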
	I1101 10:53:29.950711  505282 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:53:29.999009  505282 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:53:29.999120  505282 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:53:30.081178  505282 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:53:30.081275  505282 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:53:30.081330  505282 kubeadm.go:319] OS: Linux
	I1101 10:53:30.081380  505282 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:53:30.081452  505282 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:53:30.081523  505282 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:53:30.081586  505282 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:53:30.081646  505282 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:53:30.081703  505282 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:53:30.081756  505282 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:53:30.081814  505282 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:53:30.081869  505282 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:53:30.164836  505282 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:53:30.165000  505282 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:53:30.165100  505282 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:53:30.174396  505282 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 10:53:27.395926  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:29.906941  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:30.181398  505282 out.go:252]   - Generating certificates and keys ...
	I1101 10:53:30.181513  505282 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:53:30.181588  505282 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:53:30.922619  505282 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1101 10:53:32.396133  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:34.401473  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:31.792687  505282 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:53:32.290477  505282 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:53:33.629044  505282 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:53:33.857140  505282 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:53:33.857566  505282 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-883951 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:53:35.170289  505282 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:53:35.170723  505282 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-883951 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:53:36.057361  505282 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:53:36.573999  505282 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:53:36.810692  505282 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:53:36.810963  505282 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:53:38.392250  505282 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:53:39.018660  505282 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:53:39.132302  505282 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:53:39.331768  505282 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:53:39.797907  505282 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:53:39.798774  505282 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:53:39.801505  505282 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 10:53:36.898170  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:38.900785  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:39.805014  505282 out.go:252]   - Booting up control plane ...
	I1101 10:53:39.805129  505282 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:53:39.805211  505282 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:53:39.805281  505282 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:53:39.820267  505282 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:53:39.820599  505282 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:53:39.828517  505282 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:53:39.828851  505282 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:53:39.829045  505282 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:53:39.968614  505282 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:53:39.968743  505282 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1101 10:53:41.396219  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:43.894238  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:45.895457  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:42.469859  505282 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.501448538s
	I1101 10:53:42.473317  505282 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:53:42.473416  505282 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 10:53:42.473697  505282 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:53:42.473789  505282 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:53:46.379937  505282 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.906043271s
	I1101 10:53:47.602248  505282 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.128868083s
	I1101 10:53:48.975611  505282 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502085025s
	I1101 10:53:48.997799  505282 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:53:49.021168  505282 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:53:49.041694  505282 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:53:49.041915  505282 kubeadm.go:319] [mark-control-plane] Marking the node auto-883951 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:53:49.056005  505282 kubeadm.go:319] [bootstrap-token] Using token: q7rxpo.fz9zwcghff9yrobk
	W1101 10:53:47.895947  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:48.394818  501404 pod_ready.go:94] pod "coredns-66bc5c9577-dt2gw" is "Ready"
	I1101 10:53:48.394895  501404 pod_ready.go:86] duration metric: took 40.505805583s for pod "coredns-66bc5c9577-dt2gw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.398016  501404 pod_ready.go:83] waiting for pod "etcd-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.403309  501404 pod_ready.go:94] pod "etcd-no-preload-548708" is "Ready"
	I1101 10:53:48.403347  501404 pod_ready.go:86] duration metric: took 5.262269ms for pod "etcd-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.405754  501404 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.410790  501404 pod_ready.go:94] pod "kube-apiserver-no-preload-548708" is "Ready"
	I1101 10:53:48.410818  501404 pod_ready.go:86] duration metric: took 5.035747ms for pod "kube-apiserver-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.413439  501404 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.592201  501404 pod_ready.go:94] pod "kube-controller-manager-no-preload-548708" is "Ready"
	I1101 10:53:48.592229  501404 pod_ready.go:86] duration metric: took 178.758801ms for pod "kube-controller-manager-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.795386  501404 pod_ready.go:83] waiting for pod "kube-proxy-m7vxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:49.193143  501404 pod_ready.go:94] pod "kube-proxy-m7vxc" is "Ready"
	I1101 10:53:49.193221  501404 pod_ready.go:86] duration metric: took 397.760273ms for pod "kube-proxy-m7vxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:49.393166  501404 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:49.792465  501404 pod_ready.go:94] pod "kube-scheduler-no-preload-548708" is "Ready"
	I1101 10:53:49.792493  501404 pod_ready.go:86] duration metric: took 399.298978ms for pod "kube-scheduler-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:49.792507  501404 pod_ready.go:40] duration metric: took 41.910354523s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:53:49.908060  501404 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:53:49.910530  501404 out.go:179] * Done! kubectl is now configured to use "no-preload-548708" cluster and "default" namespace by default
	I1101 10:53:49.059204  505282 out.go:252]   - Configuring RBAC rules ...
	I1101 10:53:49.059341  505282 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:53:49.070439  505282 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:53:49.090133  505282 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:53:49.095267  505282 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:53:49.101893  505282 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:53:49.106634  505282 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:53:49.385180  505282 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:53:49.862842  505282 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:53:50.383126  505282 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:53:50.384225  505282 kubeadm.go:319] 
	I1101 10:53:50.384309  505282 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:53:50.384316  505282 kubeadm.go:319] 
	I1101 10:53:50.384396  505282 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:53:50.384401  505282 kubeadm.go:319] 
	I1101 10:53:50.384428  505282 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:53:50.384490  505282 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:53:50.384542  505282 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:53:50.384565  505282 kubeadm.go:319] 
	I1101 10:53:50.384622  505282 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:53:50.384626  505282 kubeadm.go:319] 
	I1101 10:53:50.384677  505282 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:53:50.384681  505282 kubeadm.go:319] 
	I1101 10:53:50.384736  505282 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:53:50.384814  505282 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:53:50.384886  505282 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:53:50.384890  505282 kubeadm.go:319] 
	I1101 10:53:50.385085  505282 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:53:50.385168  505282 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:53:50.385172  505282 kubeadm.go:319] 
	I1101 10:53:50.385260  505282 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token q7rxpo.fz9zwcghff9yrobk \
	I1101 10:53:50.385368  505282 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 10:53:50.385389  505282 kubeadm.go:319] 	--control-plane 
	I1101 10:53:50.385394  505282 kubeadm.go:319] 
	I1101 10:53:50.385482  505282 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:53:50.385487  505282 kubeadm.go:319] 
	I1101 10:53:50.385573  505282 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token q7rxpo.fz9zwcghff9yrobk \
	I1101 10:53:50.385686  505282 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 10:53:50.390899  505282 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:53:50.391134  505282 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:53:50.391242  505282 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
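The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is a SHA-256 over the DER-encoded public key of the cluster CA. Should it need to be recomputed on this node, the usual openssl recipe from the kubeadm documentation works against the certificatesDir set in the config above (/var/lib/minikube/certs); this is offered as a reference, not something run during the test:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'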
	I1101 10:53:50.391258  505282 cni.go:84] Creating CNI manager for ""
	I1101 10:53:50.391265  505282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:53:50.394586  505282 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:53:50.397540  505282 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:53:50.402135  505282 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:53:50.402161  505282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:53:50.433365  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:53:51.195925  505282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:53:51.195984  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:51.196068  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-883951 minikube.k8s.io/updated_at=2025_11_01T10_53_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=auto-883951 minikube.k8s.io/primary=true
	I1101 10:53:51.368372  505282 ops.go:34] apiserver oom_adj: -16
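The "apiserver oom_adj: -16" value is read straight from /proc/<pid>/oom_adj for the kube-apiserver process; a negative score tells the kernel OOM killer to prefer other victims. The same check by hand (oom_adj is the legacy knob, oom_score_adj its modern equivalent):

    PID=$(pgrep -o kube-apiserver)
    cat /proc/$PID/oom_adj /proc/$PID/oom_score_adj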
	I1101 10:53:51.368567  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:51.869476  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:52.368915  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:52.869319  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:53.369494  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:53.868620  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:54.368734  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:54.869380  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:55.369084  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:55.536562  505282 kubeadm.go:1114] duration metric: took 4.340630441s to wait for elevateKubeSystemPrivileges
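The repeated "kubectl get sa default" runs above are a poll: right after kubeadm finishes, the default service account only exists once the controller manager's service-account controller has caught up, so minikube retries until it appears before moving on. The equivalent wait as a plain loop (kubeconfig and binary paths copied from the log):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n default get sa default >/dev/null 2>&1; do sleep 0.5; done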
	I1101 10:53:55.536587  505282 kubeadm.go:403] duration metric: took 25.720559924s to StartCluster
	I1101 10:53:55.536604  505282 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:55.536658  505282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:53:55.537626  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:55.538814  505282 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:53:55.538914  505282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:53:55.539161  505282 config.go:182] Loaded profile config "auto-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:53:55.539190  505282 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:53:55.539245  505282 addons.go:70] Setting storage-provisioner=true in profile "auto-883951"
	I1101 10:53:55.539271  505282 addons.go:239] Setting addon storage-provisioner=true in "auto-883951"
	I1101 10:53:55.539293  505282 host.go:66] Checking if "auto-883951" exists ...
	I1101 10:53:55.539785  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:55.540179  505282 addons.go:70] Setting default-storageclass=true in profile "auto-883951"
	I1101 10:53:55.540198  505282 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-883951"
	I1101 10:53:55.540458  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:55.547795  505282 out.go:179] * Verifying Kubernetes components...
	I1101 10:53:55.552979  505282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:53:55.597712  505282 addons.go:239] Setting addon default-storageclass=true in "auto-883951"
	I1101 10:53:55.597749  505282 host.go:66] Checking if "auto-883951" exists ...
	I1101 10:53:55.598160  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:55.603567  505282 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:53:55.607903  505282 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:53:55.607926  505282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:53:55.607996  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:55.638703  505282 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:53:55.638724  505282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:53:55.638785  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:55.655025  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:55.701005  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:55.910716  505282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:53:55.910896  505282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:53:55.964165  505282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:53:55.973516  505282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:53:56.516226  505282 node_ready.go:35] waiting up to 15m0s for node "auto-883951" to be "Ready" ...
	I1101 10:53:56.517420  505282 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 10:53:56.817103  505282 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:53:56.819881  505282 addons.go:515] duration metric: took 1.280669833s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:53:57.022346  505282 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-883951" context rescaled to 1 replicas
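Two CoreDNS adjustments land here: the sed pipeline a few lines up injects a hosts block into the Corefile so host.minikube.internal resolves to 192.168.76.1, and the deployment is rescaled to one replica. The rescale, expressed as a plain kubectl call against the same context (a sketch of the effect, not the code path minikube actually uses):

    # injected Corefile stanza:
    #   hosts {
    #      192.168.76.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl --context auto-883951 -n kube-system scale deployment coredns --replicas=1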
	W1101 10:53:58.519543  505282 node_ready.go:57] node "auto-883951" has "Ready":"False" status (will retry)
	W1101 10:54:00.520480  505282 node_ready.go:57] node "auto-883951" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.961140031Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.973333375Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.973504051Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.973581393Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.97876586Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.9789266Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.97900555Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.986870132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.987038552Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.987119242Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.991567226Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.991738993Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.916856778Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8226e75c-0f30-4008-889a-e4fa69c02ebc name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.918901679Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=97dfd1d2-9a29-4a3d-9fd0-df8c41518515 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.925059028Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s/dashboard-metrics-scraper" id=3e03f308-051c-419f-af70-0c5459a8c5e2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.925187825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.941971915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.942966049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.967433862Z" level=info msg="Created container 6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s/dashboard-metrics-scraper" id=3e03f308-051c-419f-af70-0c5459a8c5e2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.969824095Z" level=info msg="Starting container: 6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089" id=9bc98c11-1729-41a1-96d7-80c8f915007c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.977135334Z" level=info msg="Started container" PID=1716 containerID=6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s/dashboard-metrics-scraper id=9bc98c11-1729-41a1-96d7-80c8f915007c name=/runtime.v1.RuntimeService/StartContainer sandboxID=46e462c88591a8bbb801262bea0f7df07b98dc3d81d3bbc818b021b0f0be3239
	Nov 01 10:53:54 no-preload-548708 conmon[1714]: conmon 6695b8916a5bee6a5e76 <ninfo>: container 1716 exited with status 1
	Nov 01 10:53:55 no-preload-548708 crio[649]: time="2025-11-01T10:53:55.328999755Z" level=info msg="Removing container: ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae" id=a76f82d8-e21a-4e70-896a-d673f5203534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:53:55 no-preload-548708 crio[649]: time="2025-11-01T10:53:55.336348229Z" level=info msg="Error loading conmon cgroup of container ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae: cgroup deleted" id=a76f82d8-e21a-4e70-896a-d673f5203534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:53:55 no-preload-548708 crio[649]: time="2025-11-01T10:53:55.341326138Z" level=info msg="Removed container ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s/dashboard-metrics-scraper" id=a76f82d8-e21a-4e70-896a-d673f5203534 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6695b8916a5be       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   46e462c88591a       dashboard-metrics-scraper-6ffb444bf9-g6j6s   kubernetes-dashboard
	ebe2d6e71d499       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago       Running             storage-provisioner         2                   b848c8163d274       storage-provisioner                          kube-system
	def31cf7c49fb       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   73927acb1e0fb       kubernetes-dashboard-855c9754f9-l9drd        kubernetes-dashboard
	12f9f2ae75614       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   9fc36b0988d83       coredns-66bc5c9577-dt2gw                     kube-system
	c4615627c25fa       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   93ae9d9482d4f       busybox                                      default
	8880cc0aa44ad       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   b848c8163d274       storage-provisioner                          kube-system
	31026d42f589e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   6a7322e173754       kindnet-mwwlc                                kube-system
	8d15cf2b7e132       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   fd55fa544ff54       kube-proxy-m7vxc                             kube-system
	21b6a3d81852a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   da710e44631e3       kube-apiserver-no-preload-548708             kube-system
	4d7c8dba98a18       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   fe26f2fb3c1d0       kube-scheduler-no-preload-548708             kube-system
	f5f4bd6b7426c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   50501cfc8b1e4       kube-controller-manager-no-preload-548708    kube-system
	1d6ce9e953a8b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   603065add993f       etcd-no-preload-548708                       kube-system
	
	
	==> coredns [12f9f2ae7561486cf3a5cf5e25b0238244bb53590abd5eceab13baaaf91bbfc5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49918 - 43746 "HINFO IN 7306926777048817396.1324493943256063088. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011213708s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-548708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-548708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=no-preload-548708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_51_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:51:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-548708
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:53:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:53:56 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:53:56 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:53:56 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:53:56 +0000   Sat, 01 Nov 2025 10:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-548708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0c3a0660-5fd6-454c-a1ce-cbee363950c2
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-dt2gw                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 etcd-no-preload-548708                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-mwwlc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-no-preload-548708              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-no-preload-548708     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-m7vxc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-no-preload-548708              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g6j6s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l9drd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m                     kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   Starting                 2m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node no-preload-548708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node no-preload-548708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s (x8 over 2m19s)  kubelet          Node no-preload-548708 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m8s                   kubelet          Node no-preload-548708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s                   kubelet          Node no-preload-548708 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m8s                   kubelet          Node no-preload-548708 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m4s                   node-controller  Node no-preload-548708 event: Registered Node no-preload-548708 in Controller
	  Normal   NodeReady                106s                   kubelet          Node no-preload-548708 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 70s)      kubelet          Node no-preload-548708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 70s)      kubelet          Node no-preload-548708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 70s)      kubelet          Node no-preload-548708 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node no-preload-548708 event: Registered Node no-preload-548708 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:51] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:52] overlayfs: idmapped layers are currently not supported
	[ +26.480177] overlayfs: idmapped layers are currently not supported
	[  +9.079378] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1d6ce9e953a8b3c836603bef290e36c2eae37f5508055cd9ebe57279220b4715] <==
	{"level":"warn","ts":"2025-11-01T10:53:02.649550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.854134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.855547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.945802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.989339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.994969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.051771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.088195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.107408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.145307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.190534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.245515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.271737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.303969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.386744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.406070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.470226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.501273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.531210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.553485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.618187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.629309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.645717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.670319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.736678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48922","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:54:05 up  2:36,  0 user,  load average: 5.81, 4.67, 3.43
	Linux no-preload-548708 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [31026d42f589e36ffbd94fb6e3033d7d6cf0ed9de81d4521fc55197785d8b107] <==
	I1101 10:53:06.633899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:53:06.634597       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:53:06.635071       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:53:06.635089       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:53:06.635104       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:53:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:53:06.952390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:53:06.952410       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:53:06.952440       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:53:06.952803       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:53:36.950032       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:53:36.952492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:53:36.953764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:53:36.957205       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:53:38.153375       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:53:38.153437       1 metrics.go:72] Registering metrics
	I1101 10:53:38.154275       1 controller.go:711] "Syncing nftables rules"
	I1101 10:53:46.954412       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:53:46.954533       1 main.go:301] handling current node
	I1101 10:53:56.950133       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:53:56.950169       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21b6a3d81852a5fbef2e31f92ee373c1322e58d33d0a4c6198b4f9654e688b41] <==
	I1101 10:53:05.081760       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:53:05.081836       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:53:05.081877       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:53:05.082038       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:53:05.096529       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:53:05.096719       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:53:05.096788       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:53:05.128459       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:53:05.143229       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:53:05.143679       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:53:05.143761       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:53:05.143809       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:53:05.276639       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:53:05.353514       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:53:05.593414       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:53:05.811580       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:53:07.295047       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:53:07.490394       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:53:07.559865       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:53:07.586584       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:53:07.715274       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.1.226"}
	I1101 10:53:07.750413       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.191.31"}
	I1101 10:53:09.565316       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:53:09.916557       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:53:09.956178       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f5f4bd6b7426cda5e69e50ee4f6e6167b783e0bd20ec2f2ea8043896373ef992] <==
	I1101 10:53:09.471453       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:53:09.471527       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:53:09.471742       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:53:09.480077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:53:09.479989       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:53:09.480145       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:53:09.480404       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:53:09.480439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:53:09.480736       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:53:09.480768       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:53:09.490371       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:53:09.495383       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:53:09.496442       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:53:09.496503       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:53:09.496537       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:53:09.497689       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:53:09.497703       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:53:09.497780       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:53:09.497936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-548708"
	I1101 10:53:09.498008       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:53:09.500533       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:53:09.501760       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:53:09.503949       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:53:09.508489       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:53:09.513787       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	
	
	==> kube-proxy [8d15cf2b7e1327dad8ab5a10c985a4b55630ff084d152dd39f5ad16057f2347f] <==
	I1101 10:53:06.712942       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:53:07.280177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:53:07.383647       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:53:07.383698       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:53:07.383767       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:53:07.634487       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:53:07.635280       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:53:07.651096       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:53:07.651811       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:53:07.651869       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:53:07.653851       1 config.go:200] "Starting service config controller"
	I1101 10:53:07.653916       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:53:07.653960       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:53:07.654005       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:53:07.654049       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:53:07.654088       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:53:07.654860       1 config.go:309] "Starting node config controller"
	I1101 10:53:07.654914       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:53:07.654922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:53:07.754379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:53:07.754423       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:53:07.754463       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4d7c8dba98a1808a309fd3d7927f59223183ac53462318916d991ce724a3d765] <==
	I1101 10:53:01.033700       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:53:05.177435       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:53:05.177468       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:53:05.177478       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:53:05.177485       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:53:05.343655       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:53:05.349055       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:53:05.356256       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:05.356300       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:05.361032       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:53:05.361175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:53:05.459057       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:53:17 no-preload-548708 kubelet[769]: I1101 10:53:17.159146     769 scope.go:117] "RemoveContainer" containerID="205c0e47fde7f0695f45bdce6e05f761bb4cb942c28d9c5a6d8777272719618b"
	Nov 01 10:53:18 no-preload-548708 kubelet[769]: I1101 10:53:18.163923     769 scope.go:117] "RemoveContainer" containerID="205c0e47fde7f0695f45bdce6e05f761bb4cb942c28d9c5a6d8777272719618b"
	Nov 01 10:53:18 no-preload-548708 kubelet[769]: I1101 10:53:18.164208     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:18 no-preload-548708 kubelet[769]: E1101 10:53:18.164355     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:19 no-preload-548708 kubelet[769]: I1101 10:53:19.214528     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:19 no-preload-548708 kubelet[769]: E1101 10:53:19.214696     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:20 no-preload-548708 kubelet[769]: I1101 10:53:20.220017     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:20 no-preload-548708 kubelet[769]: E1101 10:53:20.220196     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:32 no-preload-548708 kubelet[769]: I1101 10:53:32.916472     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:33 no-preload-548708 kubelet[769]: I1101 10:53:33.258607     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:33 no-preload-548708 kubelet[769]: I1101 10:53:33.259234     769 scope.go:117] "RemoveContainer" containerID="ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae"
	Nov 01 10:53:33 no-preload-548708 kubelet[769]: E1101 10:53:33.259467     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:33 no-preload-548708 kubelet[769]: I1101 10:53:33.294839     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9drd" podStartSLOduration=11.435244563 podStartE2EDuration="24.294822471s" podCreationTimestamp="2025-11-01 10:53:09 +0000 UTC" firstStartedPulling="2025-11-01 10:53:10.165923781 +0000 UTC m=+14.709478724" lastFinishedPulling="2025-11-01 10:53:23.025501689 +0000 UTC m=+27.569056632" observedRunningTime="2025-11-01 10:53:23.244620013 +0000 UTC m=+27.788174955" watchObservedRunningTime="2025-11-01 10:53:33.294822471 +0000 UTC m=+37.838377414"
	Nov 01 10:53:37 no-preload-548708 kubelet[769]: I1101 10:53:37.271459     769 scope.go:117] "RemoveContainer" containerID="8880cc0aa44ad7c73eacefbffb811b0a869e18784d7193a9c59efd28558a6c37"
	Nov 01 10:53:40 no-preload-548708 kubelet[769]: I1101 10:53:40.104555     769 scope.go:117] "RemoveContainer" containerID="ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae"
	Nov 01 10:53:40 no-preload-548708 kubelet[769]: E1101 10:53:40.104749     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:54 no-preload-548708 kubelet[769]: I1101 10:53:54.916107     769 scope.go:117] "RemoveContainer" containerID="ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae"
	Nov 01 10:53:55 no-preload-548708 kubelet[769]: I1101 10:53:55.326962     769 scope.go:117] "RemoveContainer" containerID="ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae"
	Nov 01 10:53:55 no-preload-548708 kubelet[769]: I1101 10:53:55.327867     769 scope.go:117] "RemoveContainer" containerID="6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089"
	Nov 01 10:53:55 no-preload-548708 kubelet[769]: E1101 10:53:55.328651     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:54:00 no-preload-548708 kubelet[769]: I1101 10:54:00.108320     769 scope.go:117] "RemoveContainer" containerID="6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089"
	Nov 01 10:54:00 no-preload-548708 kubelet[769]: E1101 10:54:00.108677     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:54:02 no-preload-548708 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:54:02 no-preload-548708 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:54:02 no-preload-548708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [def31cf7c49fbf2f7792ca869ef727ba0840aa7fc1d1f37c7800d617e02e98cc] <==
	2025/11/01 10:53:23 Using namespace: kubernetes-dashboard
	2025/11/01 10:53:23 Using in-cluster config to connect to apiserver
	2025/11/01 10:53:23 Using secret token for csrf signing
	2025/11/01 10:53:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:53:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:53:23 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:53:23 Generating JWE encryption key
	2025/11/01 10:53:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:53:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:53:24 Initializing JWE encryption key from synchronized object
	2025/11/01 10:53:24 Creating in-cluster Sidecar client
	2025/11/01 10:53:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:53:24 Serving insecurely on HTTP port: 9090
	2025/11/01 10:53:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:53:23 Starting overwatch
	
	
	==> storage-provisioner [8880cc0aa44ad7c73eacefbffb811b0a869e18784d7193a9c59efd28558a6c37] <==
	I1101 10:53:07.128026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:53:37.131543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ebe2d6e71d49987999115a6dbf899bb298ed040585a1bb35ed5195ebc4afd3c3] <==
	I1101 10:53:37.404577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:53:37.404688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:53:37.407989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:40.864640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:45.131135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:48.729896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:51.783321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:54.806075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:54.823154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:53:54.823418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:53:54.825387       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-548708_ed2856cd-f051-4ce2-8079-2870def3734a!
	I1101 10:53:54.830163       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5d04e54d-f042-48f8-95f5-aa02f6c4b764", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-548708_ed2856cd-f051-4ce2-8079-2870def3734a became leader
	W1101 10:53:54.830400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:54.839284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:53:54.930314       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-548708_ed2856cd-f051-4ce2-8079-2870def3734a!
	W1101 10:53:56.842714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:56.847666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:58.851520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:58.856198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:00.859294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:00.866330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:02.869634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:02.875369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:04.879381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:04.891832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-548708 -n no-preload-548708
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-548708 -n no-preload-548708: exit status 2 (398.152208ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-548708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-548708
helpers_test.go:243: (dbg) docker inspect no-preload-548708:

-- stdout --
	[
	    {
	        "Id": "965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e",
	        "Created": "2025-11-01T10:51:12.134501468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:52:46.461409069Z",
	            "FinishedAt": "2025-11-01T10:52:45.404965141Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/hostname",
	        "HostsPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/hosts",
	        "LogPath": "/var/lib/docker/containers/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e/965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e-json.log",
	        "Name": "/no-preload-548708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-548708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-548708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "965e3c07903f81cc1b3a504900b4dfad3aff2051626e40f99926a2adc3f3070e",
	                "LowerDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958-init/diff:/var/lib/docker/overlay2/5b3cb2ea9ab086a87c0915918cc57a46569999341bf2e5561daf254574213077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67cb4ce96f5caf7a46c4355864088c486608aace9a625743e19421bc67481958/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-548708",
	                "Source": "/var/lib/docker/volumes/no-preload-548708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-548708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-548708",
	                "name.minikube.sigs.k8s.io": "no-preload-548708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3dba102ad32c44dc2bf32f97bb168c2ce96dd02241da396057c14cabb2c31d0",
	            "SandboxKey": "/var/run/docker/netns/c3dba102ad32",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-548708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:36:9a:0d:0f:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "458d9289c1e4678d575d4635bc902fe82bbd4c6f42dd0c954078044d50841590",
	                    "EndpointID": "f6695f291e1e7f52b0b944b72e169fe955e3ad875bda491578ea2f75996a375a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-548708",
	                        "965e3c07903f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
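For reference, the host-port bindings recorded under "NetworkSettings.Ports" in the inspect output above can be read back individually with a Go-template filter on docker container inspect; the same template pattern appears in the minikube start logs later in this report when it looks up the SSH port. A minimal sketch, assuming the no-preload-548708 container from the output above still exists on the host:

	# print the host port published for the container's SSH port (22/tcp);
	# per the mapping above this should be 33463
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-548708
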
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548708 -n no-preload-548708
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548708 -n no-preload-548708: exit status 2 (361.836204ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-548708 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-548708 logs -n 25: (1.397473527s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-014050 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:50 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p default-k8s-diff-port-014050                                                                                                                                                                                                               │ default-k8s-diff-port-014050 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p disable-driver-mounts-514829                                                                                                                                                                                                               │ disable-driver-mounts-514829 │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ image   │ embed-certs-499088 image list --format=json                                                                                                                                                                                                   │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ pause   │ -p embed-certs-499088 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ delete  │ -p embed-certs-499088                                                                                                                                                                                                                         │ embed-certs-499088           │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-548708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p no-preload-548708 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable metrics-server -p newest-cni-196911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ stop    │ -p newest-cni-196911 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable dashboard -p newest-cni-196911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ enable dashboard -p no-preload-548708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:53 UTC │
	│ image   │ newest-cni-196911 image list --format=json                                                                                                                                                                                                    │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ pause   │ -p newest-cni-196911 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ delete  │ -p newest-cni-196911                                                                                                                                                                                                                          │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ delete  │ -p newest-cni-196911                                                                                                                                                                                                                          │ newest-cni-196911            │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ start   │ -p auto-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-883951                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │                     │
	│ image   │ no-preload-548708 image list --format=json                                                                                                                                                                                                    │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:54 UTC │ 01 Nov 25 10:54 UTC │
	│ pause   │ -p no-preload-548708 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-548708            │ jenkins │ v1.37.0 │ 01 Nov 25 10:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:53:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:53:11.381389  505282 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:53:11.381699  505282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:11.381782  505282 out.go:374] Setting ErrFile to fd 2...
	I1101 10:53:11.381893  505282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:11.382519  505282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:53:11.383154  505282 out.go:368] Setting JSON to false
	I1101 10:53:11.384212  505282 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9343,"bootTime":1761985048,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:53:11.384309  505282 start.go:143] virtualization:  
	I1101 10:53:11.388174  505282 out.go:179] * [auto-883951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:53:11.391336  505282 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:53:11.391402  505282 notify.go:221] Checking for updates...
	I1101 10:53:11.399114  505282 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:53:11.402172  505282 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:53:11.405666  505282 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:53:11.408698  505282 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:53:11.411697  505282 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:53:11.415525  505282 config.go:182] Loaded profile config "no-preload-548708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:53:11.415700  505282 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:53:11.461015  505282 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:53:11.461159  505282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:53:11.571448  505282 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:11.559552999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:53:11.571550  505282 docker.go:319] overlay module found
	I1101 10:53:11.575132  505282 out.go:179] * Using the docker driver based on user configuration
	I1101 10:53:11.578081  505282 start.go:309] selected driver: docker
	I1101 10:53:11.578106  505282 start.go:930] validating driver "docker" against <nil>
	I1101 10:53:11.578121  505282 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:53:11.578837  505282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:53:11.678205  505282 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:11.663918825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:53:11.678553  505282 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:53:11.678890  505282 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:53:11.681990  505282 out.go:179] * Using Docker driver with root privileges
	I1101 10:53:11.685017  505282 cni.go:84] Creating CNI manager for ""
	I1101 10:53:11.685085  505282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:53:11.685093  505282 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:53:11.685182  505282 start.go:353] cluster config:
	{Name:auto-883951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1101 10:53:11.688404  505282 out.go:179] * Starting "auto-883951" primary control-plane node in "auto-883951" cluster
	I1101 10:53:11.691226  505282 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:53:11.694197  505282 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:53:11.697060  505282 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:53:11.697132  505282 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:53:11.697161  505282 cache.go:59] Caching tarball of preloaded images
	I1101 10:53:11.697260  505282 preload.go:233] Found /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:53:11.697269  505282 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:53:11.697373  505282 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/config.json ...
	I1101 10:53:11.697390  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/config.json: {Name:mk3308d6ffdcf444ab84398b5bcc995b81908c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:11.697530  505282 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:53:11.718557  505282 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:53:11.718576  505282 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:53:11.718590  505282 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:53:11.718612  505282 start.go:360] acquireMachinesLock for auto-883951: {Name:mkaa0972c90dc13698f55ed05c022d37ae86426e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:53:11.718705  505282 start.go:364] duration metric: took 77.096µs to acquireMachinesLock for "auto-883951"
	I1101 10:53:11.718755  505282 start.go:93] Provisioning new machine with config: &{Name:auto-883951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:53:11.718830  505282 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:53:12.395506  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:14.396502  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:11.722433  505282 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:53:11.722680  505282 start.go:159] libmachine.API.Create for "auto-883951" (driver="docker")
	I1101 10:53:11.722717  505282 client.go:173] LocalClient.Create starting
	I1101 10:53:11.722802  505282 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem
	I1101 10:53:11.722836  505282 main.go:143] libmachine: Decoding PEM data...
	I1101 10:53:11.722853  505282 main.go:143] libmachine: Parsing certificate...
	I1101 10:53:11.722914  505282 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem
	I1101 10:53:11.722945  505282 main.go:143] libmachine: Decoding PEM data...
	I1101 10:53:11.722954  505282 main.go:143] libmachine: Parsing certificate...
	I1101 10:53:11.723306  505282 cli_runner.go:164] Run: docker network inspect auto-883951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:53:11.752461  505282 cli_runner.go:211] docker network inspect auto-883951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:53:11.752549  505282 network_create.go:284] running [docker network inspect auto-883951] to gather additional debugging logs...
	I1101 10:53:11.752567  505282 cli_runner.go:164] Run: docker network inspect auto-883951
	W1101 10:53:11.769644  505282 cli_runner.go:211] docker network inspect auto-883951 returned with exit code 1
	I1101 10:53:11.769688  505282 network_create.go:287] error running [docker network inspect auto-883951]: docker network inspect auto-883951: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-883951 not found
	I1101 10:53:11.769711  505282 network_create.go:289] output of [docker network inspect auto-883951]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-883951 not found
	
	** /stderr **
	I1101 10:53:11.769969  505282 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:53:11.786799  505282 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
	I1101 10:53:11.787184  505282 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-adecbbb769f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:b0:b5:2e:4c:30} reservation:<nil>}
	I1101 10:53:11.787419  505282 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2077d26d1806 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:49:68:b6:9e:fb} reservation:<nil>}
	I1101 10:53:11.787870  505282 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7680}
	I1101 10:53:11.787896  505282 network_create.go:124] attempt to create docker network auto-883951 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:53:11.787952  505282 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-883951 auto-883951
	I1101 10:53:11.851033  505282 network_create.go:108] docker network auto-883951 192.168.76.0/24 created
	I1101 10:53:11.851067  505282 kic.go:121] calculated static IP "192.168.76.2" for the "auto-883951" container
	I1101 10:53:11.851165  505282 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:53:11.867744  505282 cli_runner.go:164] Run: docker volume create auto-883951 --label name.minikube.sigs.k8s.io=auto-883951 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:53:11.891329  505282 oci.go:103] Successfully created a docker volume auto-883951
	I1101 10:53:11.891422  505282 cli_runner.go:164] Run: docker run --rm --name auto-883951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-883951 --entrypoint /usr/bin/test -v auto-883951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:53:12.748020  505282 oci.go:107] Successfully prepared a docker volume auto-883951
	I1101 10:53:12.748067  505282 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:53:12.748086  505282 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:53:12.748170  505282 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-883951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 10:53:16.396605  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:18.409697  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:20.894807  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:17.829021  505282 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-883951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.080811105s)
	I1101 10:53:17.829056  505282 kic.go:203] duration metric: took 5.08096661s to extract preloaded images to volume ...
	W1101 10:53:17.829207  505282 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:53:17.829330  505282 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:53:17.946497  505282 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-883951 --name auto-883951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-883951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-883951 --network auto-883951 --ip 192.168.76.2 --volume auto-883951:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:53:18.400000  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Running}}
	I1101 10:53:18.431137  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:18.457653  505282 cli_runner.go:164] Run: docker exec auto-883951 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:53:18.530190  505282 oci.go:144] the created container "auto-883951" has a running status.
	I1101 10:53:18.530216  505282 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa...
	I1101 10:53:19.690783  505282 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:53:19.719084  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:19.747896  505282 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:53:19.747914  505282 kic_runner.go:114] Args: [docker exec --privileged auto-883951 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:53:19.836854  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:19.865231  505282 machine.go:94] provisionDockerMachine start ...
	I1101 10:53:19.865330  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:19.888152  505282 main.go:143] libmachine: Using SSH client type: native
	I1101 10:53:19.888493  505282 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1101 10:53:19.888509  505282 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:53:19.889225  505282 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1101 10:53:22.895825  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:24.899773  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:23.063130  505282 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-883951
	
	I1101 10:53:23.063152  505282 ubuntu.go:182] provisioning hostname "auto-883951"
	I1101 10:53:23.063310  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:23.091608  505282 main.go:143] libmachine: Using SSH client type: native
	I1101 10:53:23.091957  505282 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1101 10:53:23.091972  505282 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-883951 && echo "auto-883951" | sudo tee /etc/hostname
	I1101 10:53:23.271045  505282 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-883951
	
	I1101 10:53:23.271205  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:23.323807  505282 main.go:143] libmachine: Using SSH client type: native
	I1101 10:53:23.324424  505282 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1101 10:53:23.324450  505282 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-883951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-883951/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-883951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:53:23.526870  505282 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:53:23.526957  505282 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-292445/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-292445/.minikube}
	I1101 10:53:23.527025  505282 ubuntu.go:190] setting up certificates
	I1101 10:53:23.527063  505282 provision.go:84] configureAuth start
	I1101 10:53:23.527170  505282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-883951
	I1101 10:53:23.551990  505282 provision.go:143] copyHostCerts
	I1101 10:53:23.552072  505282 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem, removing ...
	I1101 10:53:23.552082  505282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem
	I1101 10:53:23.552178  505282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/cert.pem (1123 bytes)
	I1101 10:53:23.552280  505282 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem, removing ...
	I1101 10:53:23.552286  505282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem
	I1101 10:53:23.552312  505282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/key.pem (1679 bytes)
	I1101 10:53:23.552372  505282 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem, removing ...
	I1101 10:53:23.552385  505282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem
	I1101 10:53:23.552410  505282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-292445/.minikube/ca.pem (1082 bytes)
	I1101 10:53:23.552623  505282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem org=jenkins.auto-883951 san=[127.0.0.1 192.168.76.2 auto-883951 localhost minikube]
	I1101 10:53:24.704827  505282 provision.go:177] copyRemoteCerts
	I1101 10:53:24.704898  505282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:53:24.704966  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:24.722518  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:24.829026  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:53:24.849263  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:53:24.868153  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:53:24.885928  505282 provision.go:87] duration metric: took 1.35882751s to configureAuth
	I1101 10:53:24.885961  505282 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:53:24.886142  505282 config.go:182] Loaded profile config "auto-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:53:24.886238  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:24.906334  505282 main.go:143] libmachine: Using SSH client type: native
	I1101 10:53:24.906645  505282 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1101 10:53:24.906666  505282 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:53:25.181106  505282 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:53:25.181136  505282 machine.go:97] duration metric: took 5.31588771s to provisionDockerMachine
	I1101 10:53:25.181146  505282 client.go:176] duration metric: took 13.45842325s to LocalClient.Create
	I1101 10:53:25.181163  505282 start.go:167] duration metric: took 13.458484764s to libmachine.API.Create "auto-883951"
	I1101 10:53:25.181171  505282 start.go:293] postStartSetup for "auto-883951" (driver="docker")
	I1101 10:53:25.181183  505282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:53:25.181244  505282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:53:25.181296  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:25.198905  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:25.309108  505282 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:53:25.312457  505282 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:53:25.312488  505282 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:53:25.312500  505282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/addons for local assets ...
	I1101 10:53:25.312560  505282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-292445/.minikube/files for local assets ...
	I1101 10:53:25.312650  505282 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem -> 2942882.pem in /etc/ssl/certs
	I1101 10:53:25.312773  505282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:53:25.320290  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:53:25.338492  505282 start.go:296] duration metric: took 157.304556ms for postStartSetup
	I1101 10:53:25.338912  505282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-883951
	I1101 10:53:25.356731  505282 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/config.json ...
	I1101 10:53:25.357176  505282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:53:25.357232  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:25.374325  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:25.477983  505282 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:53:25.482699  505282 start.go:128] duration metric: took 13.763850155s to createHost
	I1101 10:53:25.482728  505282 start.go:83] releasing machines lock for "auto-883951", held for 13.764015343s
	I1101 10:53:25.482841  505282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-883951
	I1101 10:53:25.499495  505282 ssh_runner.go:195] Run: cat /version.json
	I1101 10:53:25.499508  505282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:53:25.499547  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:25.499575  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:25.516070  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:25.533282  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:25.722617  505282 ssh_runner.go:195] Run: systemctl --version
	I1101 10:53:25.729032  505282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:53:25.779784  505282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:53:25.785062  505282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:53:25.785138  505282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:53:25.813669  505282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:53:25.813736  505282 start.go:496] detecting cgroup driver to use...
	I1101 10:53:25.813804  505282 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:53:25.813872  505282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:53:25.832226  505282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:53:25.845315  505282 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:53:25.845433  505282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:53:25.863213  505282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:53:25.883470  505282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:53:26.014073  505282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:53:26.145935  505282 docker.go:234] disabling docker service ...
	I1101 10:53:26.146002  505282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:53:26.169568  505282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:53:26.183552  505282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:53:26.307742  505282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:53:26.436855  505282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:53:26.451481  505282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:53:26.473638  505282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:53:26.473756  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.483849  505282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:53:26.483969  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.493918  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.502735  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.515085  505282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:53:26.524007  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.533215  505282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.547439  505282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:53:26.556516  505282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:53:26.564701  505282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:53:26.572598  505282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:53:26.692883  505282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:53:27.154692  505282 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:53:27.154763  505282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:53:27.159135  505282 start.go:564] Will wait 60s for crictl version
	I1101 10:53:27.159200  505282 ssh_runner.go:195] Run: which crictl
	I1101 10:53:27.163486  505282 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:53:27.189797  505282 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:53:27.189881  505282 ssh_runner.go:195] Run: crio --version
	I1101 10:53:27.219285  505282 ssh_runner.go:195] Run: crio --version
	I1101 10:53:27.252795  505282 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:53:27.255889  505282 cli_runner.go:164] Run: docker network inspect auto-883951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:53:27.271754  505282 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:53:27.275357  505282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:53:27.284778  505282 kubeadm.go:884] updating cluster {Name:auto-883951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:53:27.284900  505282 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:53:27.285002  505282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:53:27.317318  505282 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:53:27.317344  505282 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:53:27.317399  505282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:53:27.342935  505282 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:53:27.342961  505282 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:53:27.342969  505282 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:53:27.343095  505282 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-883951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:53:27.343184  505282 ssh_runner.go:195] Run: crio config
	I1101 10:53:27.405482  505282 cni.go:84] Creating CNI manager for ""
	I1101 10:53:27.405502  505282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:53:27.405537  505282 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:53:27.405571  505282 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-883951 NodeName:auto-883951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:53:27.405703  505282 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-883951"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
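The block above is the combined kubeadm InitConfiguration/ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration that minikube renders and then copies to /var/tmp/minikube/kubeadm.yaml.new (see the scp and `kubeadm init --config /var/tmp/minikube/kubeadm.yaml` lines further down). A minimal sketch for sanity-checking such a config outside the test run is shown below; it assumes a compatible kubeadm binary on PATH and that the config has been saved locally as kubeadm.yaml — it is illustrative, not part of minikube.

// sketch: dry-run kubeadm against a saved copy of the generated config.
// Assumes kubeadm (matching v1.34.x) is on PATH and ./kubeadm.yaml exists;
// --dry-run only renders manifests and prints the actions it would take.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubeadm", "init", "--config", "kubeadm.yaml", "--dry-run")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm dry-run failed:", err)
		os.Exit(1)
	}
}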
	
	I1101 10:53:27.405776  505282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:53:27.413878  505282 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:53:27.413946  505282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:53:27.423838  505282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1101 10:53:27.438623  505282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:53:27.455352  505282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1101 10:53:27.470023  505282 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:53:27.474199  505282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:53:27.484071  505282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:53:27.598023  505282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:53:27.617471  505282 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951 for IP: 192.168.76.2
	I1101 10:53:27.617498  505282 certs.go:195] generating shared ca certs ...
	I1101 10:53:27.617514  505282 certs.go:227] acquiring lock for ca certs: {Name:mk3df4e063325d73738735b31503f59f3c799837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:27.617733  505282 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key
	I1101 10:53:27.617797  505282 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key
	I1101 10:53:27.617811  505282 certs.go:257] generating profile certs ...
	I1101 10:53:27.617883  505282 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.key
	I1101 10:53:27.617905  505282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt with IP's: []
	I1101 10:53:27.934966  505282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt ...
	I1101 10:53:27.934998  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt: {Name:mk13aa5637adee1bd3e03dd5586cbdc587a4c079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:27.935219  505282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.key ...
	I1101 10:53:27.935234  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.key: {Name:mk18e4bfc275e6f061acd7a655bde8aa84398d1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:27.935333  505282 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key.56036f0b
	I1101 10:53:27.935353  505282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt.56036f0b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:53:28.722042  505282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt.56036f0b ...
	I1101 10:53:28.722077  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt.56036f0b: {Name:mk108523cd1464e39ecc54dd12f9048e449b70c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:28.722263  505282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key.56036f0b ...
	I1101 10:53:28.722280  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key.56036f0b: {Name:mkc4500c3de9b25b1d6ccae4d40bfe72eb961be9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:28.722370  505282 certs.go:382] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt.56036f0b -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt
	I1101 10:53:28.722459  505282 certs.go:386] copying /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key.56036f0b -> /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key
	I1101 10:53:28.722521  505282 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.key
	I1101 10:53:28.722541  505282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.crt with IP's: []
	I1101 10:53:29.398818  505282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.crt ...
	I1101 10:53:29.398849  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.crt: {Name:mkaa5658e3814c8033310ab2247b745a7c1e815b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:29.399027  505282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.key ...
	I1101 10:53:29.399040  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.key: {Name:mk268237e69825433f47c260896b2e64739f75a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:29.399232  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem (1338 bytes)
	W1101 10:53:29.399289  505282 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288_empty.pem, impossibly tiny 0 bytes
	I1101 10:53:29.399303  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:53:29.399327  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:53:29.399352  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:53:29.399380  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/certs/key.pem (1679 bytes)
	I1101 10:53:29.399427  505282 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem (1708 bytes)
	I1101 10:53:29.400065  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:53:29.418754  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:53:29.439190  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:53:29.457608  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:53:29.478788  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 10:53:29.496644  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:53:29.514823  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:53:29.532851  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:53:29.552153  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/certs/294288.pem --> /usr/share/ca-certificates/294288.pem (1338 bytes)
	I1101 10:53:29.570919  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/ssl/certs/2942882.pem --> /usr/share/ca-certificates/2942882.pem (1708 bytes)
	I1101 10:53:29.588755  505282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:53:29.607019  505282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:53:29.619816  505282 ssh_runner.go:195] Run: openssl version
	I1101 10:53:29.626135  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2942882.pem && ln -fs /usr/share/ca-certificates/2942882.pem /etc/ssl/certs/2942882.pem"
	I1101 10:53:29.634371  505282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2942882.pem
	I1101 10:53:29.637874  505282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:54 /usr/share/ca-certificates/2942882.pem
	I1101 10:53:29.637938  505282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2942882.pem
	I1101 10:53:29.679390  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2942882.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:53:29.688027  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:53:29.696629  505282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:53:29.700480  505282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:47 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:53:29.700546  505282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:53:29.741679  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:53:29.750276  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294288.pem && ln -fs /usr/share/ca-certificates/294288.pem /etc/ssl/certs/294288.pem"
	I1101 10:53:29.758556  505282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294288.pem
	I1101 10:53:29.762500  505282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:54 /usr/share/ca-certificates/294288.pem
	I1101 10:53:29.762595  505282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294288.pem
	I1101 10:53:29.803515  505282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294288.pem /etc/ssl/certs/51391683.0"
	I1101 10:53:29.812412  505282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:53:29.815972  505282 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:53:29.816031  505282 kubeadm.go:401] StartCluster: {Name:auto-883951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-883951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:53:29.816110  505282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:53:29.816169  505282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:53:29.845324  505282 cri.go:89] found id: ""
	I1101 10:53:29.845465  505282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:53:29.854085  505282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:53:29.862092  505282 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:53:29.862188  505282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:53:29.869930  505282 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:53:29.869948  505282 kubeadm.go:158] found existing configuration files:
	
	I1101 10:53:29.870000  505282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:53:29.877649  505282 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:53:29.877736  505282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:53:29.885116  505282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:53:29.907124  505282 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:53:29.907211  505282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:53:29.916147  505282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:53:29.924882  505282 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:53:29.925048  505282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:53:29.933154  505282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:53:29.941813  505282 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:53:29.941949  505282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:53:29.950711  505282 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:53:29.999009  505282 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:53:29.999120  505282 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:53:30.081178  505282 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:53:30.081275  505282 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:53:30.081330  505282 kubeadm.go:319] OS: Linux
	I1101 10:53:30.081380  505282 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:53:30.081452  505282 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:53:30.081523  505282 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:53:30.081586  505282 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:53:30.081646  505282 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:53:30.081703  505282 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:53:30.081756  505282 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:53:30.081814  505282 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:53:30.081869  505282 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:53:30.164836  505282 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:53:30.165000  505282 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:53:30.165100  505282 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:53:30.174396  505282 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 10:53:27.395926  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:29.906941  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:30.181398  505282 out.go:252]   - Generating certificates and keys ...
	I1101 10:53:30.181513  505282 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:53:30.181588  505282 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:53:30.922619  505282 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1101 10:53:32.396133  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:34.401473  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:31.792687  505282 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:53:32.290477  505282 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:53:33.629044  505282 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:53:33.857140  505282 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:53:33.857566  505282 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-883951 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:53:35.170289  505282 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:53:35.170723  505282 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-883951 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:53:36.057361  505282 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:53:36.573999  505282 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:53:36.810692  505282 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:53:36.810963  505282 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:53:38.392250  505282 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:53:39.018660  505282 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:53:39.132302  505282 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:53:39.331768  505282 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:53:39.797907  505282 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:53:39.798774  505282 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:53:39.801505  505282 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 10:53:36.898170  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:38.900785  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:39.805014  505282 out.go:252]   - Booting up control plane ...
	I1101 10:53:39.805129  505282 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:53:39.805211  505282 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:53:39.805281  505282 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:53:39.820267  505282 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:53:39.820599  505282 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:53:39.828517  505282 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:53:39.828851  505282 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:53:39.829045  505282 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:53:39.968614  505282 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:53:39.968743  505282 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1101 10:53:41.396219  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:43.894238  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	W1101 10:53:45.895457  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:42.469859  505282 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.501448538s
	I1101 10:53:42.473317  505282 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:53:42.473416  505282 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 10:53:42.473697  505282 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:53:42.473789  505282 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:53:46.379937  505282 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.906043271s
	I1101 10:53:47.602248  505282 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.128868083s
	I1101 10:53:48.975611  505282 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502085025s
	I1101 10:53:48.997799  505282 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:53:49.021168  505282 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:53:49.041694  505282 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:53:49.041915  505282 kubeadm.go:319] [mark-control-plane] Marking the node auto-883951 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:53:49.056005  505282 kubeadm.go:319] [bootstrap-token] Using token: q7rxpo.fz9zwcghff9yrobk
	W1101 10:53:47.895947  501404 pod_ready.go:104] pod "coredns-66bc5c9577-dt2gw" is not "Ready", error: <nil>
	I1101 10:53:48.394818  501404 pod_ready.go:94] pod "coredns-66bc5c9577-dt2gw" is "Ready"
	I1101 10:53:48.394895  501404 pod_ready.go:86] duration metric: took 40.505805583s for pod "coredns-66bc5c9577-dt2gw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.398016  501404 pod_ready.go:83] waiting for pod "etcd-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.403309  501404 pod_ready.go:94] pod "etcd-no-preload-548708" is "Ready"
	I1101 10:53:48.403347  501404 pod_ready.go:86] duration metric: took 5.262269ms for pod "etcd-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.405754  501404 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.410790  501404 pod_ready.go:94] pod "kube-apiserver-no-preload-548708" is "Ready"
	I1101 10:53:48.410818  501404 pod_ready.go:86] duration metric: took 5.035747ms for pod "kube-apiserver-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.413439  501404 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.592201  501404 pod_ready.go:94] pod "kube-controller-manager-no-preload-548708" is "Ready"
	I1101 10:53:48.592229  501404 pod_ready.go:86] duration metric: took 178.758801ms for pod "kube-controller-manager-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:48.795386  501404 pod_ready.go:83] waiting for pod "kube-proxy-m7vxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:49.193143  501404 pod_ready.go:94] pod "kube-proxy-m7vxc" is "Ready"
	I1101 10:53:49.193221  501404 pod_ready.go:86] duration metric: took 397.760273ms for pod "kube-proxy-m7vxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:49.393166  501404 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:49.792465  501404 pod_ready.go:94] pod "kube-scheduler-no-preload-548708" is "Ready"
	I1101 10:53:49.792493  501404 pod_ready.go:86] duration metric: took 399.298978ms for pod "kube-scheduler-no-preload-548708" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:53:49.792507  501404 pod_ready.go:40] duration metric: took 41.910354523s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:53:49.908060  501404 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:53:49.910530  501404 out.go:179] * Done! kubectl is now configured to use "no-preload-548708" cluster and "default" namespace by default
	I1101 10:53:49.059204  505282 out.go:252]   - Configuring RBAC rules ...
	I1101 10:53:49.059341  505282 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:53:49.070439  505282 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:53:49.090133  505282 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:53:49.095267  505282 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:53:49.101893  505282 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:53:49.106634  505282 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:53:49.385180  505282 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:53:49.862842  505282 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:53:50.383126  505282 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:53:50.384225  505282 kubeadm.go:319] 
	I1101 10:53:50.384309  505282 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:53:50.384316  505282 kubeadm.go:319] 
	I1101 10:53:50.384396  505282 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:53:50.384401  505282 kubeadm.go:319] 
	I1101 10:53:50.384428  505282 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:53:50.384490  505282 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:53:50.384542  505282 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:53:50.384565  505282 kubeadm.go:319] 
	I1101 10:53:50.384622  505282 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:53:50.384626  505282 kubeadm.go:319] 
	I1101 10:53:50.384677  505282 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:53:50.384681  505282 kubeadm.go:319] 
	I1101 10:53:50.384736  505282 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:53:50.384814  505282 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:53:50.384886  505282 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:53:50.384890  505282 kubeadm.go:319] 
	I1101 10:53:50.385085  505282 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:53:50.385168  505282 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:53:50.385172  505282 kubeadm.go:319] 
	I1101 10:53:50.385260  505282 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token q7rxpo.fz9zwcghff9yrobk \
	I1101 10:53:50.385368  505282 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 \
	I1101 10:53:50.385389  505282 kubeadm.go:319] 	--control-plane 
	I1101 10:53:50.385394  505282 kubeadm.go:319] 
	I1101 10:53:50.385482  505282 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:53:50.385487  505282 kubeadm.go:319] 
	I1101 10:53:50.385573  505282 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token q7rxpo.fz9zwcghff9yrobk \
	I1101 10:53:50.385686  505282 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4d8e4ef2cfbd8a0cca12b16e25431027d8449e00a0bc32981cf7291a1c52c2c5 
	I1101 10:53:50.390899  505282 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:53:50.391134  505282 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:53:50.391242  505282 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:53:50.391258  505282 cni.go:84] Creating CNI manager for ""
	I1101 10:53:50.391265  505282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:53:50.394586  505282 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:53:50.397540  505282 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:53:50.402135  505282 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:53:50.402161  505282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:53:50.433365  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:53:51.195925  505282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:53:51.195984  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:51.196068  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-883951 minikube.k8s.io/updated_at=2025_11_01T10_53_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=auto-883951 minikube.k8s.io/primary=true
	I1101 10:53:51.368372  505282 ops.go:34] apiserver oom_adj: -16
	I1101 10:53:51.368567  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:51.869476  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:52.368915  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:52.869319  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:53.369494  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:53.868620  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:54.368734  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:54.869380  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:55.369084  505282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:53:55.536562  505282 kubeadm.go:1114] duration metric: took 4.340630441s to wait for elevateKubeSystemPrivileges
	I1101 10:53:55.536587  505282 kubeadm.go:403] duration metric: took 25.720559924s to StartCluster
	I1101 10:53:55.536604  505282 settings.go:142] acquiring lock: {Name:mk0d052e23e14be15fce2e46fc126903822ef051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:55.536658  505282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:53:55.537626  505282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/kubeconfig: {Name:mke1f2159e7f1167d5023dd5a0b20d5caec4e226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:53:55.538814  505282 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:53:55.538914  505282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:53:55.539161  505282 config.go:182] Loaded profile config "auto-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:53:55.539190  505282 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:53:55.539245  505282 addons.go:70] Setting storage-provisioner=true in profile "auto-883951"
	I1101 10:53:55.539271  505282 addons.go:239] Setting addon storage-provisioner=true in "auto-883951"
	I1101 10:53:55.539293  505282 host.go:66] Checking if "auto-883951" exists ...
	I1101 10:53:55.539785  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:55.540179  505282 addons.go:70] Setting default-storageclass=true in profile "auto-883951"
	I1101 10:53:55.540198  505282 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-883951"
	I1101 10:53:55.540458  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:55.547795  505282 out.go:179] * Verifying Kubernetes components...
	I1101 10:53:55.552979  505282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:53:55.597712  505282 addons.go:239] Setting addon default-storageclass=true in "auto-883951"
	I1101 10:53:55.597749  505282 host.go:66] Checking if "auto-883951" exists ...
	I1101 10:53:55.598160  505282 cli_runner.go:164] Run: docker container inspect auto-883951 --format={{.State.Status}}
	I1101 10:53:55.603567  505282 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:53:55.607903  505282 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:53:55.607926  505282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:53:55.607996  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:55.638703  505282 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:53:55.638724  505282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:53:55.638785  505282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-883951
	I1101 10:53:55.655025  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:55.701005  505282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/auto-883951/id_rsa Username:docker}
	I1101 10:53:55.910716  505282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:53:55.910896  505282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:53:55.964165  505282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:53:55.973516  505282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:53:56.516226  505282 node_ready.go:35] waiting up to 15m0s for node "auto-883951" to be "Ready" ...
	I1101 10:53:56.517420  505282 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 10:53:56.817103  505282 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:53:56.819881  505282 addons.go:515] duration metric: took 1.280669833s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:53:57.022346  505282 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-883951" context rescaled to 1 replicas
	W1101 10:53:58.519543  505282 node_ready.go:57] node "auto-883951" has "Ready":"False" status (will retry)
	W1101 10:54:00.520480  505282 node_ready.go:57] node "auto-883951" has "Ready":"False" status (will retry)
	W1101 10:54:03.020088  505282 node_ready.go:57] node "auto-883951" has "Ready":"False" status (will retry)
	W1101 10:54:05.020170  505282 node_ready.go:57] node "auto-883951" has "Ready":"False" status (will retry)
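Note on the interleaved output above: lines tagged with pid 505282 belong to the auto-883951 bring-up, while lines tagged 501404 come from a concurrent start of the no-preload-548708 profile, which finishes with its own "Done!" message; the auto-883951 node is still reporting "Ready":"False" when this excerpt ends. The "==> CRI-O <==" and "==> container status <==" sections that follow are post-mortem dumps collected from the no-preload-548708 node, essentially what `minikube logs` gathers. A hypothetical sketch for pulling the same CRI-O journal from a still-running profile (assuming `minikube` is on PATH and the profile has not been deleted) is:

// sketch: fetch the last CRI-O journal entries from a minikube node,
// roughly the data shown in the "==> CRI-O <==" section below.
// The profile name is taken from this report; adjust as needed.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "no-preload-548708", "ssh", "--",
		"sudo", "journalctl", "-u", "crio", "-n", "25", "--no-pager")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run() // best-effort; the profile may already be gone after the test run
}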
	
	
	==> CRI-O <==
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.961140031Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.973333375Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.973504051Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.973581393Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.97876586Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.9789266Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.97900555Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.986870132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.987038552Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.987119242Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.991567226Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:53:46 no-preload-548708 crio[649]: time="2025-11-01T10:53:46.991738993Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.916856778Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8226e75c-0f30-4008-889a-e4fa69c02ebc name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.918901679Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=97dfd1d2-9a29-4a3d-9fd0-df8c41518515 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.925059028Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s/dashboard-metrics-scraper" id=3e03f308-051c-419f-af70-0c5459a8c5e2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.925187825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.941971915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.942966049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.967433862Z" level=info msg="Created container 6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s/dashboard-metrics-scraper" id=3e03f308-051c-419f-af70-0c5459a8c5e2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.969824095Z" level=info msg="Starting container: 6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089" id=9bc98c11-1729-41a1-96d7-80c8f915007c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:53:54 no-preload-548708 crio[649]: time="2025-11-01T10:53:54.977135334Z" level=info msg="Started container" PID=1716 containerID=6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s/dashboard-metrics-scraper id=9bc98c11-1729-41a1-96d7-80c8f915007c name=/runtime.v1.RuntimeService/StartContainer sandboxID=46e462c88591a8bbb801262bea0f7df07b98dc3d81d3bbc818b021b0f0be3239
	Nov 01 10:53:54 no-preload-548708 conmon[1714]: conmon 6695b8916a5bee6a5e76 <ninfo>: container 1716 exited with status 1
	Nov 01 10:53:55 no-preload-548708 crio[649]: time="2025-11-01T10:53:55.328999755Z" level=info msg="Removing container: ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae" id=a76f82d8-e21a-4e70-896a-d673f5203534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:53:55 no-preload-548708 crio[649]: time="2025-11-01T10:53:55.336348229Z" level=info msg="Error loading conmon cgroup of container ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae: cgroup deleted" id=a76f82d8-e21a-4e70-896a-d673f5203534 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:53:55 no-preload-548708 crio[649]: time="2025-11-01T10:53:55.341326138Z" level=info msg="Removed container ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s/dashboard-metrics-scraper" id=a76f82d8-e21a-4e70-896a-d673f5203534 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6695b8916a5be       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago       Exited              dashboard-metrics-scraper   3                   46e462c88591a       dashboard-metrics-scraper-6ffb444bf9-g6j6s   kubernetes-dashboard
	ebe2d6e71d499       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           29 seconds ago       Running             storage-provisioner         2                   b848c8163d274       storage-provisioner                          kube-system
	def31cf7c49fb       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   73927acb1e0fb       kubernetes-dashboard-855c9754f9-l9drd        kubernetes-dashboard
	12f9f2ae75614       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   9fc36b0988d83       coredns-66bc5c9577-dt2gw                     kube-system
	c4615627c25fa       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   93ae9d9482d4f       busybox                                      default
	8880cc0aa44ad       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   b848c8163d274       storage-provisioner                          kube-system
	31026d42f589e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   6a7322e173754       kindnet-mwwlc                                kube-system
	8d15cf2b7e132       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   fd55fa544ff54       kube-proxy-m7vxc                             kube-system
	21b6a3d81852a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   da710e44631e3       kube-apiserver-no-preload-548708             kube-system
	4d7c8dba98a18       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   fe26f2fb3c1d0       kube-scheduler-no-preload-548708             kube-system
	f5f4bd6b7426c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   50501cfc8b1e4       kube-controller-manager-no-preload-548708    kube-system
	1d6ce9e953a8b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   603065add993f       etcd-no-preload-548708                       kube-system
	
	
	==> coredns [12f9f2ae7561486cf3a5cf5e25b0238244bb53590abd5eceab13baaaf91bbfc5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49918 - 43746 "HINFO IN 7306926777048817396.1324493943256063088. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011213708s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-548708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-548708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=no-preload-548708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_51_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:51:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-548708
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:53:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:53:56 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:53:56 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:53:56 +0000   Sat, 01 Nov 2025 10:51:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:53:56 +0000   Sat, 01 Nov 2025 10:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-548708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0c3a0660-5fd6-454c-a1ce-cbee363950c2
	  Boot ID:                    323f6c58-9970-4b3a-91da-7194cd29149a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-66bc5c9577-dt2gw                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 etcd-no-preload-548708                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m10s
	  kube-system                 kindnet-mwwlc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-no-preload-548708              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-no-preload-548708     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-m7vxc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-no-preload-548708              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g6j6s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l9drd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m2s                   kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node no-preload-548708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node no-preload-548708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m21s (x8 over 2m21s)  kubelet          Node no-preload-548708 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m10s                  kubelet          Node no-preload-548708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s                  kubelet          Node no-preload-548708 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m10s                  kubelet          Node no-preload-548708 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m6s                   node-controller  Node no-preload-548708 event: Registered Node no-preload-548708 in Controller
	  Normal   NodeReady                108s                   kubelet          Node no-preload-548708 status is now: NodeReady
	  Normal   Starting                 72s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 72s)      kubelet          Node no-preload-548708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 72s)      kubelet          Node no-preload-548708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 72s)      kubelet          Node no-preload-548708 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node no-preload-548708 event: Registered Node no-preload-548708 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.903928] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[ +31.459219] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[  +0.965289] overlayfs: idmapped layers are currently not supported
	[ +39.711904] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:48] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[ +42.559605] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:50] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:51] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:52] overlayfs: idmapped layers are currently not supported
	[ +26.480177] overlayfs: idmapped layers are currently not supported
	[  +9.079378] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1d6ce9e953a8b3c836603bef290e36c2eae37f5508055cd9ebe57279220b4715] <==
	{"level":"warn","ts":"2025-11-01T10:53:02.649550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.854134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.855547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.945802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.989339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:02.994969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.051771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.088195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.107408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.145307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.190534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.245515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.271737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.303969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.386744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.406070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.470226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.501273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.531210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.553485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.618187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.629309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.645717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.670319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:53:03.736678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48922","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:54:07 up  2:36,  0 user,  load average: 5.58, 4.65, 3.42
	Linux no-preload-548708 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [31026d42f589e36ffbd94fb6e3033d7d6cf0ed9de81d4521fc55197785d8b107] <==
	I1101 10:53:06.633899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:53:06.634597       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:53:06.635071       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:53:06.635089       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:53:06.635104       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:53:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:53:06.952390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:53:06.952410       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:53:06.952440       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:53:06.952803       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:53:36.950032       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:53:36.952492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:53:36.953764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:53:36.957205       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:53:38.153375       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:53:38.153437       1 metrics.go:72] Registering metrics
	I1101 10:53:38.154275       1 controller.go:711] "Syncing nftables rules"
	I1101 10:53:46.954412       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:53:46.954533       1 main.go:301] handling current node
	I1101 10:53:56.950133       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:53:56.950169       1 main.go:301] handling current node
	I1101 10:54:06.955286       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:54:06.955324       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21b6a3d81852a5fbef2e31f92ee373c1322e58d33d0a4c6198b4f9654e688b41] <==
	I1101 10:53:05.081760       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:53:05.081836       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:53:05.081877       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:53:05.082038       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:53:05.096529       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:53:05.096719       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:53:05.096788       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:53:05.128459       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:53:05.143229       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:53:05.143679       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:53:05.143761       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:53:05.143809       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:53:05.276639       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:53:05.353514       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:53:05.593414       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:53:05.811580       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:53:07.295047       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:53:07.490394       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:53:07.559865       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:53:07.586584       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:53:07.715274       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.1.226"}
	I1101 10:53:07.750413       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.191.31"}
	I1101 10:53:09.565316       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:53:09.916557       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:53:09.956178       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f5f4bd6b7426cda5e69e50ee4f6e6167b783e0bd20ec2f2ea8043896373ef992] <==
	I1101 10:53:09.471453       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:53:09.471527       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:53:09.471742       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:53:09.480077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:53:09.479989       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:53:09.480145       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:53:09.480404       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:53:09.480439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:53:09.480736       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:53:09.480768       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:53:09.490371       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:53:09.495383       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:53:09.496442       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:53:09.496503       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:53:09.496537       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:53:09.497689       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:53:09.497703       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:53:09.497780       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:53:09.497936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-548708"
	I1101 10:53:09.498008       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:53:09.500533       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:53:09.501760       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:53:09.503949       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:53:09.508489       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:53:09.513787       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	
	
	==> kube-proxy [8d15cf2b7e1327dad8ab5a10c985a4b55630ff084d152dd39f5ad16057f2347f] <==
	I1101 10:53:06.712942       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:53:07.280177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:53:07.383647       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:53:07.383698       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:53:07.383767       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:53:07.634487       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:53:07.635280       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:53:07.651096       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:53:07.651811       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:53:07.651869       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:53:07.653851       1 config.go:200] "Starting service config controller"
	I1101 10:53:07.653916       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:53:07.653960       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:53:07.654005       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:53:07.654049       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:53:07.654088       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:53:07.654860       1 config.go:309] "Starting node config controller"
	I1101 10:53:07.654914       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:53:07.654922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:53:07.754379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:53:07.754423       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:53:07.754463       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4d7c8dba98a1808a309fd3d7927f59223183ac53462318916d991ce724a3d765] <==
	I1101 10:53:01.033700       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:53:05.177435       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:53:05.177468       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:53:05.177478       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:53:05.177485       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:53:05.343655       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:53:05.349055       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:53:05.356256       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:05.356300       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:53:05.361032       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:53:05.361175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:53:05.459057       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:53:17 no-preload-548708 kubelet[769]: I1101 10:53:17.159146     769 scope.go:117] "RemoveContainer" containerID="205c0e47fde7f0695f45bdce6e05f761bb4cb942c28d9c5a6d8777272719618b"
	Nov 01 10:53:18 no-preload-548708 kubelet[769]: I1101 10:53:18.163923     769 scope.go:117] "RemoveContainer" containerID="205c0e47fde7f0695f45bdce6e05f761bb4cb942c28d9c5a6d8777272719618b"
	Nov 01 10:53:18 no-preload-548708 kubelet[769]: I1101 10:53:18.164208     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:18 no-preload-548708 kubelet[769]: E1101 10:53:18.164355     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:19 no-preload-548708 kubelet[769]: I1101 10:53:19.214528     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:19 no-preload-548708 kubelet[769]: E1101 10:53:19.214696     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:20 no-preload-548708 kubelet[769]: I1101 10:53:20.220017     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:20 no-preload-548708 kubelet[769]: E1101 10:53:20.220196     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:32 no-preload-548708 kubelet[769]: I1101 10:53:32.916472     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:33 no-preload-548708 kubelet[769]: I1101 10:53:33.258607     769 scope.go:117] "RemoveContainer" containerID="6a1a2239089a5daf7e5e7ada43c6f0fed1efa6e285e41fea56567319a8a2e8a6"
	Nov 01 10:53:33 no-preload-548708 kubelet[769]: I1101 10:53:33.259234     769 scope.go:117] "RemoveContainer" containerID="ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae"
	Nov 01 10:53:33 no-preload-548708 kubelet[769]: E1101 10:53:33.259467     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:33 no-preload-548708 kubelet[769]: I1101 10:53:33.294839     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9drd" podStartSLOduration=11.435244563 podStartE2EDuration="24.294822471s" podCreationTimestamp="2025-11-01 10:53:09 +0000 UTC" firstStartedPulling="2025-11-01 10:53:10.165923781 +0000 UTC m=+14.709478724" lastFinishedPulling="2025-11-01 10:53:23.025501689 +0000 UTC m=+27.569056632" observedRunningTime="2025-11-01 10:53:23.244620013 +0000 UTC m=+27.788174955" watchObservedRunningTime="2025-11-01 10:53:33.294822471 +0000 UTC m=+37.838377414"
	Nov 01 10:53:37 no-preload-548708 kubelet[769]: I1101 10:53:37.271459     769 scope.go:117] "RemoveContainer" containerID="8880cc0aa44ad7c73eacefbffb811b0a869e18784d7193a9c59efd28558a6c37"
	Nov 01 10:53:40 no-preload-548708 kubelet[769]: I1101 10:53:40.104555     769 scope.go:117] "RemoveContainer" containerID="ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae"
	Nov 01 10:53:40 no-preload-548708 kubelet[769]: E1101 10:53:40.104749     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:53:54 no-preload-548708 kubelet[769]: I1101 10:53:54.916107     769 scope.go:117] "RemoveContainer" containerID="ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae"
	Nov 01 10:53:55 no-preload-548708 kubelet[769]: I1101 10:53:55.326962     769 scope.go:117] "RemoveContainer" containerID="ce2097bbe6ea9d0675afe1c3dbb04cceb0dd8a9838271f659fd0759aac2f1fae"
	Nov 01 10:53:55 no-preload-548708 kubelet[769]: I1101 10:53:55.327867     769 scope.go:117] "RemoveContainer" containerID="6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089"
	Nov 01 10:53:55 no-preload-548708 kubelet[769]: E1101 10:53:55.328651     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:54:00 no-preload-548708 kubelet[769]: I1101 10:54:00.108320     769 scope.go:117] "RemoveContainer" containerID="6695b8916a5bee6a5e762d34c90d770eb745b1cd463bf8f53b651438045ea089"
	Nov 01 10:54:00 no-preload-548708 kubelet[769]: E1101 10:54:00.108677     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6j6s_kubernetes-dashboard(00bfe3c5-7ebb-40ea-9445-13eb4766054f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6j6s" podUID="00bfe3c5-7ebb-40ea-9445-13eb4766054f"
	Nov 01 10:54:02 no-preload-548708 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:54:02 no-preload-548708 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:54:02 no-preload-548708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [def31cf7c49fbf2f7792ca869ef727ba0840aa7fc1d1f37c7800d617e02e98cc] <==
	2025/11/01 10:53:23 Starting overwatch
	2025/11/01 10:53:23 Using namespace: kubernetes-dashboard
	2025/11/01 10:53:23 Using in-cluster config to connect to apiserver
	2025/11/01 10:53:23 Using secret token for csrf signing
	2025/11/01 10:53:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:53:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:53:23 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:53:23 Generating JWE encryption key
	2025/11/01 10:53:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:53:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:53:24 Initializing JWE encryption key from synchronized object
	2025/11/01 10:53:24 Creating in-cluster Sidecar client
	2025/11/01 10:53:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:53:24 Serving insecurely on HTTP port: 9090
	2025/11/01 10:53:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8880cc0aa44ad7c73eacefbffb811b0a869e18784d7193a9c59efd28558a6c37] <==
	I1101 10:53:07.128026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:53:37.131543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ebe2d6e71d49987999115a6dbf899bb298ed040585a1bb35ed5195ebc4afd3c3] <==
	W1101 10:53:37.407989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:40.864640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:45.131135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:48.729896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:51.783321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:54.806075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:54.823154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:53:54.823418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:53:54.825387       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-548708_ed2856cd-f051-4ce2-8079-2870def3734a!
	I1101 10:53:54.830163       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5d04e54d-f042-48f8-95f5-aa02f6c4b764", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-548708_ed2856cd-f051-4ce2-8079-2870def3734a became leader
	W1101 10:53:54.830400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:54.839284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:53:54.930314       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-548708_ed2856cd-f051-4ce2-8079-2870def3734a!
	W1101 10:53:56.842714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:56.847666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:58.851520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:53:58.856198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:00.859294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:00.866330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:02.869634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:02.875369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:04.879381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:04.891832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:06.901660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:06.910801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-548708 -n no-preload-548708
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-548708 -n no-preload-548708: exit status 2 (400.122179ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-548708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.61s)
E1101 10:59:49.936076  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:59:56.287781  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:00.214930  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:03.676389  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:06.210371  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:20.712221  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/auto-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.84
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.01
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.14
18 TestDownloadOnly/v1.34.1/DeleteAll 0.32
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.23
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 170.07
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.84
48 TestAddons/StoppedEnableDisable 12.46
49 TestCertOptions 42.88
50 TestCertExpiration 249.32
52 TestForceSystemdFlag 35.89
53 TestForceSystemdEnv 40.45
58 TestErrorSpam/setup 33.32
59 TestErrorSpam/start 0.84
60 TestErrorSpam/status 1.09
61 TestErrorSpam/pause 5.7
62 TestErrorSpam/unpause 5.27
63 TestErrorSpam/stop 1.53
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.59
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 28.35
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
75 TestFunctional/serial/CacheCmd/cache/add_local 1.08
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 37.28
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.8
86 TestFunctional/serial/LogsFileCmd 1.61
87 TestFunctional/serial/InvalidService 4.04
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 11.04
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.28
93 TestFunctional/parallel/StatusCmd 1.34
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 23.49
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.14
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.34
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
113 TestFunctional/parallel/License 0.38
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 0.96
116 TestFunctional/parallel/ImageCommands/ImageListShort 1.69
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.6
121 TestFunctional/parallel/ImageCommands/Setup 0.66
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.59
129 TestFunctional/parallel/ProfileCmd/profile_list 0.52
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.37
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/MountCmd/any-port 7.13
148 TestFunctional/parallel/MountCmd/specific-port 1.76
149 TestFunctional/parallel/MountCmd/VerifyCleanup 2.3
150 TestFunctional/parallel/ServiceCmd/List 0.61
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 207.01
163 TestMultiControlPlane/serial/DeployApp 7.05
164 TestMultiControlPlane/serial/PingHostFromPods 1.63
165 TestMultiControlPlane/serial/AddWorkerNode 60.81
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
168 TestMultiControlPlane/serial/CopyFile 20.37
169 TestMultiControlPlane/serial/StopSecondaryNode 12.88
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
171 TestMultiControlPlane/serial/RestartSecondaryNode 21.17
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.11
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 120.69
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.82
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.18
177 TestMultiControlPlane/serial/RestartCluster 74.39
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
179 TestMultiControlPlane/serial/AddSecondaryNode 82.65
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 81.13
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.88
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 38.7
211 TestKicCustomNetwork/use_default_bridge_network 37.47
212 TestKicExistingNetwork 39.9
213 TestKicCustomSubnet 38.84
214 TestKicStaticIP 36.4
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 77.75
219 TestMountStart/serial/StartWithMountFirst 9.69
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 10.08
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.29
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 8.75
227 TestMountStart/serial/VerifyMountPostStop 0.34
230 TestMultiNode/serial/FreshStart2Nodes 140.13
231 TestMultiNode/serial/DeployApp2Nodes 5.28
232 TestMultiNode/serial/PingHostFrom2Pods 0.93
233 TestMultiNode/serial/AddNode 60.48
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.77
236 TestMultiNode/serial/CopyFile 10.5
237 TestMultiNode/serial/StopNode 2.43
238 TestMultiNode/serial/StartAfterStop 8.23
239 TestMultiNode/serial/RestartKeepsNodes 76.53
240 TestMultiNode/serial/DeleteNode 5.71
241 TestMultiNode/serial/StopMultiNode 24.5
242 TestMultiNode/serial/RestartMultiNode 51.87
243 TestMultiNode/serial/ValidateNameConflict 37.58
248 TestPreload 128.62
250 TestScheduledStopUnix 113.56
253 TestInsufficientStorage 13.62
254 TestRunningBinaryUpgrade 54.05
256 TestKubernetesUpgrade 367.04
257 TestMissingContainerUpgrade 122.89
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 51.88
261 TestNoKubernetes/serial/StartWithStopK8s 9.05
262 TestNoKubernetes/serial/Start 10.23
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
264 TestNoKubernetes/serial/ProfileList 0.69
265 TestNoKubernetes/serial/Stop 1.31
266 TestNoKubernetes/serial/StartNoArgs 7.97
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 0.68
269 TestStoppedBinaryUpgrade/Upgrade 60.9
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
279 TestPause/serial/Start 81.8
280 TestPause/serial/SecondStartNoReconfiguration 26.93
289 TestNetworkPlugins/group/false 5.92
294 TestStartStop/group/old-k8s-version/serial/FirstStart 62.31
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.44
297 TestStartStop/group/old-k8s-version/serial/Stop 12.02
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
299 TestStartStop/group/old-k8s-version/serial/SecondStart 47.07
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.84
307 TestStartStop/group/embed-certs/serial/FirstStart 81.56
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.05
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.1
313 TestStartStop/group/embed-certs/serial/DeployApp 9.35
315 TestStartStop/group/embed-certs/serial/Stop 12.04
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
317 TestStartStop/group/embed-certs/serial/SecondStart 47.79
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.4
323 TestStartStop/group/no-preload/serial/FirstStart 71.18
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
329 TestStartStop/group/newest-cni/serial/FirstStart 44.77
330 TestStartStop/group/no-preload/serial/DeployApp 8.41
332 TestStartStop/group/no-preload/serial/Stop 12.37
333 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/Stop 1.34
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
337 TestStartStop/group/newest-cni/serial/SecondStart 19.91
338 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
339 TestStartStop/group/no-preload/serial/SecondStart 64.41
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
344 TestNetworkPlugins/group/auto/Start 87.64
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
349 TestNetworkPlugins/group/kindnet/Start 79.47
350 TestNetworkPlugins/group/auto/KubeletFlags 0.41
351 TestNetworkPlugins/group/auto/NetCatPod 12.38
352 TestNetworkPlugins/group/auto/DNS 0.2
353 TestNetworkPlugins/group/auto/Localhost 0.21
354 TestNetworkPlugins/group/auto/HairPin 0.14
355 TestNetworkPlugins/group/calico/Start 67.78
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.36
359 TestNetworkPlugins/group/kindnet/DNS 0.22
360 TestNetworkPlugins/group/kindnet/Localhost 0.18
361 TestNetworkPlugins/group/kindnet/HairPin 0.15
362 TestNetworkPlugins/group/custom-flannel/Start 66.17
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.36
365 TestNetworkPlugins/group/calico/NetCatPod 11.45
366 TestNetworkPlugins/group/calico/DNS 0.19
367 TestNetworkPlugins/group/calico/Localhost 0.18
368 TestNetworkPlugins/group/calico/HairPin 0.18
369 TestNetworkPlugins/group/enable-default-cni/Start 86.3
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.36
372 TestNetworkPlugins/group/custom-flannel/DNS 0.21
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
375 TestNetworkPlugins/group/flannel/Start 61.96
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.52
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
381 TestNetworkPlugins/group/flannel/ControllerPod 6
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
383 TestNetworkPlugins/group/bridge/Start 83.6
384 TestNetworkPlugins/group/flannel/NetCatPod 12.38
385 TestNetworkPlugins/group/flannel/DNS 0.21
386 TestNetworkPlugins/group/flannel/Localhost 0.16
387 TestNetworkPlugins/group/flannel/HairPin 0.17
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 9.33
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (5.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-633552 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-633552 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.841828298s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.84s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 09:47:05.756173  294288 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 09:47:05.756259  294288 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-633552
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-633552: exit status 85 (91.379819ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-633552 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-633552 │ jenkins │ v1.37.0 │ 01 Nov 25 09:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:46:59
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:46:59.961060  294293 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:46:59.961224  294293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:46:59.961256  294293 out.go:374] Setting ErrFile to fd 2...
	I1101 09:46:59.961277  294293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:46:59.961556  294293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	W1101 09:46:59.961718  294293 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21832-292445/.minikube/config/config.json: open /home/jenkins/minikube-integration/21832-292445/.minikube/config/config.json: no such file or directory
	I1101 09:46:59.962148  294293 out.go:368] Setting JSON to true
	I1101 09:46:59.963053  294293 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5372,"bootTime":1761985048,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 09:46:59.963152  294293 start.go:143] virtualization:  
	I1101 09:46:59.967196  294293 out.go:99] [download-only-633552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1101 09:46:59.967408  294293 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 09:46:59.967491  294293 notify.go:221] Checking for updates...
	I1101 09:46:59.970360  294293 out.go:171] MINIKUBE_LOCATION=21832
	I1101 09:46:59.973363  294293 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:46:59.976407  294293 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 09:46:59.979265  294293 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 09:46:59.982117  294293 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 09:46:59.987711  294293 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:46:59.987967  294293 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:47:00.039470  294293 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:47:00.039622  294293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:47:00.245501  294293 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 09:47:00.217992824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:47:00.245632  294293 docker.go:319] overlay module found
	I1101 09:47:00.249426  294293 out.go:99] Using the docker driver based on user configuration
	I1101 09:47:00.249480  294293 start.go:309] selected driver: docker
	I1101 09:47:00.249488  294293 start.go:930] validating driver "docker" against <nil>
	I1101 09:47:00.249611  294293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:47:00.332233  294293 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 09:47:00.308627046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:47:00.332429  294293 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:47:00.333170  294293 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 09:47:00.333370  294293 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:47:00.336968  294293 out.go:171] Using Docker driver with root privileges
	I1101 09:47:00.340688  294293 cni.go:84] Creating CNI manager for ""
	I1101 09:47:00.341630  294293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:47:00.341646  294293 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:47:00.341750  294293 start.go:353] cluster config:
	{Name:download-only-633552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-633552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:47:00.345677  294293 out.go:99] Starting "download-only-633552" primary control-plane node in "download-only-633552" cluster
	I1101 09:47:00.345726  294293 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:47:00.348914  294293 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:47:00.349005  294293 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:47:00.349060  294293 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:47:00.385137  294293 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:47:00.385363  294293 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:47:00.385466  294293 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:47:00.403228  294293 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 09:47:00.403257  294293 cache.go:59] Caching tarball of preloaded images
	I1101 09:47:00.403420  294293 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:47:00.419475  294293 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 09:47:00.419529  294293 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1101 09:47:00.509678  294293 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1101 09:47:00.509870  294293 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 09:47:03.702190  294293 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:47:03.702573  294293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/download-only-633552/config.json ...
	I1101 09:47:03.702733  294293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/download-only-633552/config.json: {Name:mk565d77775657bcad8b3e2c3b60597c20344268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:47:03.702978  294293 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:47:03.703241  294293 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21832-292445/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-633552 host does not exist
	  To start a cluster, run: "minikube start -p download-only-633552"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
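The download step in the log above fetches the v1.28.0 preload tarball together with an md5 checksum obtained from the GCS API and verifies the file against it. As a minimal illustrative sketch (not minikube's actual code), the same verification could look like this in Go, reusing the checksum and cache path reported for this run:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// Checksum and path taken from the log above; both are specific to this run.
		const wantMD5 = "e092595ade89dbfc477bd4cd6b9c633b"
		path := "/home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"

		f, err := os.Open(path)
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()

		// Hash the downloaded tarball and compare against the expected md5.
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Println("read:", err)
			return
		}
		fmt.Println("checksum match:", hex.EncodeToString(h.Sum(nil)) == wantMD5)
	}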

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-633552
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-046639 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-046639 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.012695968s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.01s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 09:47:10.230072  294288 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:47:10.230111  294288 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-292445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-046639
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-046639: exit status 85 (140.01791ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-633552 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-633552 │ jenkins │ v1.37.0 │ 01 Nov 25 09:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ delete  │ -p download-only-633552                                                                                                                                                   │ download-only-633552 │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │ 01 Nov 25 09:47 UTC │
	│ start   │ -o=json --download-only -p download-only-046639 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-046639 │ jenkins │ v1.37.0 │ 01 Nov 25 09:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:47:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:47:06.261335  294492 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:47:06.261454  294492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:47:06.261491  294492 out.go:374] Setting ErrFile to fd 2...
	I1101 09:47:06.261504  294492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:47:06.261745  294492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 09:47:06.262145  294492 out.go:368] Setting JSON to true
	I1101 09:47:06.262938  294492 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5378,"bootTime":1761985048,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 09:47:06.263030  294492 start.go:143] virtualization:  
	I1101 09:47:06.266520  294492 out.go:99] [download-only-046639] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:47:06.266709  294492 notify.go:221] Checking for updates...
	I1101 09:47:06.269599  294492 out.go:171] MINIKUBE_LOCATION=21832
	I1101 09:47:06.272735  294492 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:47:06.275681  294492 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 09:47:06.278462  294492 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 09:47:06.281346  294492 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 09:47:06.287013  294492 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:47:06.287312  294492 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:47:06.317843  294492 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:47:06.317955  294492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:47:06.375528  294492 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-01 09:47:06.366656248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:47:06.375654  294492 docker.go:319] overlay module found
	I1101 09:47:06.378715  294492 out.go:99] Using the docker driver based on user configuration
	I1101 09:47:06.378761  294492 start.go:309] selected driver: docker
	I1101 09:47:06.378768  294492 start.go:930] validating driver "docker" against <nil>
	I1101 09:47:06.378882  294492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:47:06.441069  294492 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-01 09:47:06.432027058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:47:06.441230  294492 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:47:06.441512  294492 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 09:47:06.441683  294492 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:47:06.444896  294492 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-046639 host does not exist
	  To start a cluster, run: "minikube start -p download-only-046639"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.32s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-046639
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 09:47:12.090041  294288 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-569786 --alsologtostderr --binary-mirror http://127.0.0.1:45357 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-569786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-569786
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-714840
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-714840: exit status 85 (83.320119ms)

                                                
                                                
-- stdout --
	* Profile "addons-714840" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-714840"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-714840
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-714840: exit status 85 (69.788797ms)

                                                
                                                
-- stdout --
	* Profile "addons-714840" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-714840"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (170.07s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-714840 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-714840 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m50.069649171s)
--- PASS: TestAddons/Setup (170.07s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-714840 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-714840 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.84s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-714840 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-714840 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [05ad1c23-5edd-46e5-9b2d-0191b3a2c248] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [05ad1c23-5edd-46e5-9b2d-0191b3a2c248] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003671465s
addons_test.go:694: (dbg) Run:  kubectl --context addons-714840 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-714840 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-714840 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-714840 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.84s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.46s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-714840
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-714840: (12.178630079s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-714840
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-714840
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-714840
--- PASS: TestAddons/StoppedEnableDisable (12.46s)

                                                
                                    
TestCertOptions (42.88s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-186677 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1101 10:44:46.749560  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:03.677089  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:17.653080  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-186677 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.2374597s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-186677 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-186677 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-186677 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-186677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-186677
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-186677: (2.702447827s)
--- PASS: TestCertOptions (42.88s)
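The sequence above starts a cluster with custom apiserver SANs and a non-default port, then inspects the generated certificate over SSH. A hypothetical stand-alone sketch of that check in Go (assuming the cert-options-186677 profile from this run is still running and the same minikube binary is used; this is not the test's own implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask the node for the apiserver certificate, as the test does over SSH.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "cert-options-186677",
			"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
		if err != nil {
			fmt.Println("ssh/openssl failed:", err)
			return
		}
		// The extra IPs/names passed via --apiserver-ips/--apiserver-names should
		// appear in the certificate's Subject Alternative Name section.
		for _, want := range []string{"192.168.15.15", "www.google.com"} {
			fmt.Printf("SAN %q present: %v\n", want, strings.Contains(string(out), want))
		}
	}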

                                                
                                    
TestCertExpiration (249.32s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-308600 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-308600 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (45.637941677s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-308600 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.21200086s)
helpers_test.go:175: Cleaning up "cert-expiration-308600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-308600
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-308600: (2.470611467s)
--- PASS: TestCertExpiration (249.32s)

                                                
                                    
TestForceSystemdFlag (35.89s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-173920 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-173920 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.929403181s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-173920 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-173920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-173920
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-173920: (2.635617824s)
--- PASS: TestForceSystemdFlag (35.89s)

                                                
                                    
TestForceSystemdEnv (40.45s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-555657 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-555657 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.308675413s)
helpers_test.go:175: Cleaning up "force-systemd-env-555657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-555657
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-555657: (3.136913587s)
--- PASS: TestForceSystemdEnv (40.45s)

                                                
                                    
TestErrorSpam/setup (33.32s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-388358 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-388358 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-388358 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-388358 --driver=docker  --container-runtime=crio: (33.3171289s)
--- PASS: TestErrorSpam/setup (33.32s)

                                                
                                    
TestErrorSpam/start (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
TestErrorSpam/status (1.09s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 status
--- PASS: TestErrorSpam/status (1.09s)

                                                
                                    
TestErrorSpam/pause (5.7s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause: exit status 80 (2.335321099s)

                                                
                                                
-- stdout --
	* Pausing node nospam-388358 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:54:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause: exit status 80 (1.564891438s)

                                                
                                                
-- stdout --
	* Pausing node nospam-388358 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:54:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause: exit status 80 (1.799918076s)

                                                
                                                
-- stdout --
	* Pausing node nospam-388358 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:54:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.70s)
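All three pause attempts above exit 80 for the same underlying reason: `sudo runc list -f json` inside the node fails because /run/runc does not exist. A small sketch, assuming the same binary and profile are still available, that runs that probe directly via `minikube ssh` so the error can be observed without going through `minikube pause`:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the exact probe the pause path reports failing in the stderr above.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "nospam-388358",
		"ssh", "sudo runc list -f json")
	out, err := cmd.CombinedOutput()
	// On this run the expected output is the runc error about /run/runc.
	fmt.Printf("err: %v\n%s", err, out)
}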

                                                
                                    
TestErrorSpam/unpause (5.27s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause: exit status 80 (1.73981746s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-388358 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:54:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause: exit status 80 (1.972116791s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-388358 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:54:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause: exit status 80 (1.558751906s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-388358 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:54:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.27s)

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 stop: (1.320676398s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388358 --log_dir /tmp/nospam-388358 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21832-292445/.minikube/files/etc/test/nested/copy/294288/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.59s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-839033 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1101 09:55:03.676811  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:03.683368  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:03.694818  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:03.716272  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:03.757800  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:03.839332  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:04.000984  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:04.324533  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:04.966300  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:06.247626  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:08.810490  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:13.933016  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:24.175623  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:44.657022  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-839033 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.594644765s)
--- PASS: TestFunctional/serial/StartWithProxy (82.59s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 09:55:46.067849  294288 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-839033 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-839033 --alsologtostderr -v=8: (28.346558113s)
functional_test.go:678: soft start took 28.347052402s for "functional-839033" cluster.
I1101 09:56:14.414690  294288 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (28.35s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-839033 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-839033 cache add registry.k8s.io/pause:3.1: (1.212663472s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-839033 cache add registry.k8s.io/pause:3.3: (1.198227303s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-839033 cache add registry.k8s.io/pause:latest: (1.123170742s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-839033 /tmp/TestFunctionalserialCacheCmdcacheadd_local199082383/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cache add minikube-local-cache-test:functional-839033
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cache delete minikube-local-cache-test:functional-839033
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-839033
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (318.427795ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
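The reload flow above is: remove the cached image on the node with crictl, confirm that `crictl inspecti` now fails, run `cache reload`, and confirm the image is back. A sketch of that round trip, reusing only commands that appear in the log (binary path and profile name are assumptions carried over from this run):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	// 1. Remove the image inside the node.
	_ = run("-p", "functional-839033", "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// 2. inspecti should now fail (exit status 1 in the log above).
	if err := run("-p", "functional-839033", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("unexpected: image still present")
	}
	// 3. Reload from minikube's local cache.
	_ = run("-p", "functional-839033", "cache", "reload")
	// 4. inspecti should succeed again.
	if err := run("-p", "functional-839033", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("unexpected: image missing after reload")
	}
}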

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 kubectl -- --context functional-839033 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-839033 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.28s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-839033 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 09:56:25.619916  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-839033 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.279143942s)
functional_test.go:776: restart took 37.279248469s for "functional-839033" cluster.
I1101 09:56:59.106352  294288 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.28s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-839033 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
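The health check above lists the control-plane pods as JSON and reports each component's phase and readiness. A sketch of an equivalent check, assuming kubectl is on PATH and reusing the context name and label selector from the log; decoding only status.phase and the Ready condition is a simplification of what the test actually asserts.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-839033",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pl podList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}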

                                                
                                    
TestFunctional/serial/LogsCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-839033 logs: (1.799897606s)
--- PASS: TestFunctional/serial/LogsCmd (1.80s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 logs --file /tmp/TestFunctionalserialLogsFileCmd7893853/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-839033 logs --file /tmp/TestFunctionalserialLogsFileCmd7893853/001/logs.txt: (1.605717078s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.61s)

                                                
                                    
TestFunctional/serial/InvalidService (4.04s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-839033 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-839033
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-839033: exit status 115 (377.286271ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31306 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-839033 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 config get cpus: exit status 14 (83.557544ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 config get cpus: exit status 14 (80.440016ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
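As the log shows, `config get cpus` exits with status 14 when the key is unset, and the test round-trips unset/get/set/get/unset/get. A small sketch of driving that same sequence from Go; interpreting exit status 14 as "key not set" is taken from this run, not from documented CLI behaviour.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func getCPUs() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-839033",
		"config", "get", "cpus").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("cpus is not set (exit status 14)")
		return
	}
	fmt.Printf("cpus = %s", out)
}

func main() {
	getCPUs()
	_ = exec.Command("out/minikube-linux-arm64", "-p", "functional-839033", "config", "set", "cpus", "2").Run()
	getCPUs()
	_ = exec.Command("out/minikube-linux-arm64", "-p", "functional-839033", "config", "unset", "cpus").Run()
	getCPUs()
}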

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-839033 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-839033 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 322085: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.04s)
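This test starts `minikube dashboard --url` as a background process and later stops it; the "unable to kill pid" helper line simply means the process had already exited by cleanup time. A sketch of the same start-then-stop pattern with os/exec, using the flags from the log; the fixed sleep is a placeholder, since a real check would read the printed URL from stdout instead.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "dashboard", "--url",
		"--port", "36195", "-p", "functional-839033", "--alsologtostderr", "-v=1")
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	// Give the dashboard proxy a moment to come up, then stop it.
	time.Sleep(10 * time.Second)
	if err := cmd.Process.Kill(); err != nil {
		// Matches the log's "process already finished" case.
		fmt.Println("kill:", err)
	}
	_ = cmd.Wait()
}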

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-839033 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-839033 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (202.514377ms)

                                                
                                                
-- stdout --
	* [functional-839033] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:07:28.085708  319533 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:07:28.085957  319533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:07:28.085992  319533 out.go:374] Setting ErrFile to fd 2...
	I1101 10:07:28.086014  319533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:07:28.086328  319533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:07:28.086760  319533 out.go:368] Setting JSON to false
	I1101 10:07:28.087731  319533 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6600,"bootTime":1761985048,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:07:28.087848  319533 start.go:143] virtualization:  
	I1101 10:07:28.091516  319533 out.go:179] * [functional-839033] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:07:28.095221  319533 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:07:28.095303  319533 notify.go:221] Checking for updates...
	I1101 10:07:28.101205  319533 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:07:28.104197  319533 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:07:28.107100  319533 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:07:28.110087  319533 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:07:28.113190  319533 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:07:28.116626  319533 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:07:28.117273  319533 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:07:28.151788  319533 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:07:28.151922  319533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:07:28.215880  319533 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:07:28.205811391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:07:28.215988  319533 docker.go:319] overlay module found
	I1101 10:07:28.219098  319533 out.go:179] * Using the docker driver based on existing profile
	I1101 10:07:28.221980  319533 start.go:309] selected driver: docker
	I1101 10:07:28.222001  319533 start.go:930] validating driver "docker" against &{Name:functional-839033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-839033 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:07:28.222116  319533 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:07:28.225573  319533 out.go:203] 
	W1101 10:07:28.228476  319533 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 10:07:28.231221  319533 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-839033 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-839033 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-839033 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (277.206755ms)

                                                
                                                
-- stdout --
	* [functional-839033] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:07:41.104218  321619 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:07:41.104469  321619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:07:41.104493  321619 out.go:374] Setting ErrFile to fd 2...
	I1101 10:07:41.104517  321619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:07:41.104988  321619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:07:41.105439  321619 out.go:368] Setting JSON to false
	I1101 10:07:41.106347  321619 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6613,"bootTime":1761985048,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:07:41.106461  321619 start.go:143] virtualization:  
	I1101 10:07:41.109807  321619 out.go:179] * [functional-839033] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1101 10:07:41.113943  321619 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:07:41.114016  321619 notify.go:221] Checking for updates...
	I1101 10:07:41.119811  321619 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:07:41.122700  321619 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:07:41.125582  321619 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:07:41.128644  321619 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:07:41.131509  321619 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:07:41.137146  321619 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:07:41.137855  321619 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:07:41.173044  321619 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:07:41.173183  321619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:07:41.265418  321619 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:07:41.252492948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:07:41.265524  321619 docker.go:319] overlay module found
	I1101 10:07:41.268909  321619 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 10:07:41.272142  321619 start.go:309] selected driver: docker
	I1101 10:07:41.272174  321619 start.go:930] validating driver "docker" against &{Name:functional-839033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-839033 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:07:41.272274  321619 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:07:41.275839  321619 out.go:203] 
	W1101 10:07:41.278868  321619 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 10:07:41.281815  321619 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [862df5e5-49f4-4e53-af42-ab10150d24bb] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004312033s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-839033 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-839033 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-839033 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-839033 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [67c71415-3f3e-4fab-a2f4-d7024d12dc71] Pending
helpers_test.go:352: "sp-pod" [67c71415-3f3e-4fab-a2f4-d7024d12dc71] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [67c71415-3f3e-4fab-a2f4-d7024d12dc71] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003995223s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-839033 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-839033 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-839033 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6a42f8bb-1918-4d64-8d46-d1b9dc413b4f] Pending
helpers_test.go:352: "sp-pod" [6a42f8bb-1918-4d64-8d46-d1b9dc413b4f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003407646s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-839033 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.49s)
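The persistence check above reduces to the following kubectl sequence (manifests are the test's own testdata; a file written through the claim must survive deleting and re-creating the pod):

  kubectl --context functional-839033 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-839033 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-839033 exec sp-pod -- touch /tmp/mount/foo     # write through the mounted claim
  kubectl --context functional-839033 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-839033 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-839033 exec sp-pod -- ls /tmp/mount            # foo should still be present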

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh -n functional-839033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cp functional-839033:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1556380574/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh -n functional-839033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh -n functional-839033 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.14s)
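For reference, the copy round-trip shown above as plain commands (the /tmp destination on the host is an arbitrary example path):

  # host -> guest
  out/minikube-linux-arm64 -p functional-839033 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-arm64 -p functional-839033 ssh -n functional-839033 "sudo cat /home/docker/cp-test.txt"
  # guest -> host
  out/minikube-linux-arm64 -p functional-839033 cp functional-839033:/home/docker/cp-test.txt /tmp/cp-test.txt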

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/294288/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo cat /etc/test/nested/copy/294288/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/294288.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo cat /etc/ssl/certs/294288.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/294288.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo cat /usr/share/ca-certificates/294288.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2942882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo cat /etc/ssl/certs/2942882.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2942882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo cat /usr/share/ca-certificates/2942882.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.34s)
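A quick way to confirm that the synced certificate is identical in both guest locations (a sketch; assumes sha256sum is available in the Ubuntu-based kicbase image):

  out/minikube-linux-arm64 -p functional-839033 ssh "sudo sha256sum /etc/ssl/certs/294288.pem /usr/share/ca-certificates/294288.pem"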

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-839033 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
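The same label inspection can be done either with the go-template used by the test or with kubectl's built-in flag:

  kubectl --context functional-839033 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
  kubectl --context functional-839033 get nodes --show-labels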

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 ssh "sudo systemctl is-active docker": exit status 1 (373.580398ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 ssh "sudo systemctl is-active containerd": exit status 1 (403.396723ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
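On a cri-o profile only the crio unit should be active; the other runtimes report "inactive" and systemctl exits with status 3, which is what the non-zero exits above reflect. A sketch (the crio unit name is assumed here):

  out/minikube-linux-arm64 -p functional-839033 ssh "sudo systemctl is-active crio"        # expect: active
  out/minikube-linux-arm64 -p functional-839033 ssh "sudo systemctl is-active docker"      # expect: inactive, exit 3
  out/minikube-linux-arm64 -p functional-839033 ssh "sudo systemctl is-active containerd"  # expect: inactive, exit 3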

                                                
                                    
x
+
TestFunctional/parallel/License (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-arm64 -p functional-839033 image ls --format short --alsologtostderr: (1.6904035s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-839033 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-839033 image ls --format short --alsologtostderr:
I1101 10:07:47.507907  322693 out.go:360] Setting OutFile to fd 1 ...
I1101 10:07:47.508105  322693 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:47.508119  322693 out.go:374] Setting ErrFile to fd 2...
I1101 10:07:47.508126  322693 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:47.508665  322693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
I1101 10:07:47.509451  322693 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:47.509578  322693 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:47.510057  322693 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
I1101 10:07:47.535597  322693 ssh_runner.go:195] Run: systemctl --version
I1101 10:07:47.535797  322693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
I1101 10:07:47.555977  322693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
I1101 10:07:47.663635  322693 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 10:07:49.124339  322693 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.460668624s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.69s)
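The ImageList* tests that follow exercise the same listing in other renderings; all four formats come from the same command:

  out/minikube-linux-arm64 -p functional-839033 image ls --format short
  out/minikube-linux-arm64 -p functional-839033 image ls --format table
  out/minikube-linux-arm64 -p functional-839033 image ls --format json
  out/minikube-linux-arm64 -p functional-839033 image ls --format yaml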

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-839033 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ latest             │ 46fabdd7f288c │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-839033 image ls --format table --alsologtostderr:
I1101 10:07:52.647084  322983 out.go:360] Setting OutFile to fd 1 ...
I1101 10:07:52.647201  322983 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:52.647211  322983 out.go:374] Setting ErrFile to fd 2...
I1101 10:07:52.647217  322983 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:52.647665  322983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
I1101 10:07:52.648310  322983 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:52.648434  322983 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:52.648888  322983 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
I1101 10:07:52.674385  322983 ssh_runner.go:195] Run: systemctl --version
I1101 10:07:52.674455  322983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
I1101 10:07:52.696990  322983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
I1101 10:07:52.816075  322983 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-839033 image ls --format json --alsologtostderr:
[{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b46108996944
9f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797","repoDigests":["docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee72
68ef851a6eb7c9cb9626d8035b08ba4424","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006680"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","re
poDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256
:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a
45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-839033 image ls --format json --alsologtostderr:
I1101 10:07:52.392692  322946 out.go:360] Setting OutFile to fd 1 ...
I1101 10:07:52.392809  322946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:52.392815  322946 out.go:374] Setting ErrFile to fd 2...
I1101 10:07:52.392828  322946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:52.393219  322946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
I1101 10:07:52.394162  322946 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:52.394310  322946 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:52.394964  322946 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
I1101 10:07:52.415972  322946 ssh_runner.go:195] Run: systemctl --version
I1101 10:07:52.416034  322946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
I1101 10:07:52.435968  322946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
I1101 10:07:52.543481  322946 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-839033 image ls --format yaml --alsologtostderr:
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797
repoDigests:
- docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "176006680"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-839033 image ls --format yaml --alsologtostderr:
I1101 10:07:49.186389  322740 out.go:360] Setting OutFile to fd 1 ...
I1101 10:07:49.186556  322740 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:49.186586  322740 out.go:374] Setting ErrFile to fd 2...
I1101 10:07:49.186607  322740 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:49.186879  322740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
I1101 10:07:49.187512  322740 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:49.187691  322740 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:49.188186  322740 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
I1101 10:07:49.213133  322740 ssh_runner.go:195] Run: systemctl --version
I1101 10:07:49.213182  322740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
I1101 10:07:49.246654  322740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
I1101 10:07:49.363540  322740 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 ssh pgrep buildkitd: exit status 1 (444.882025ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image build -t localhost/my-image:functional-839033 testdata/build --alsologtostderr
2025/11/01 10:07:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-839033 image build -t localhost/my-image:functional-839033 testdata/build --alsologtostderr: (3.907472278s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-839033 image build -t localhost/my-image:functional-839033 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d310c24e1cf
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-839033
--> 276e94083f2
Successfully tagged localhost/my-image:functional-839033
276e94083f2678e306e3ee3296da7af2dd7d873fc0c33d3a1a1f3ceb3e3f638f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-839033 image build -t localhost/my-image:functional-839033 testdata/build --alsologtostderr:
I1101 10:07:49.943324  322857 out.go:360] Setting OutFile to fd 1 ...
I1101 10:07:49.947163  322857 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:49.947182  322857 out.go:374] Setting ErrFile to fd 2...
I1101 10:07:49.947189  322857 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:07:49.947585  322857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
I1101 10:07:49.948278  322857 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:49.949066  322857 config.go:182] Loaded profile config "functional-839033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:07:49.949533  322857 cli_runner.go:164] Run: docker container inspect functional-839033 --format={{.State.Status}}
I1101 10:07:49.984806  322857 ssh_runner.go:195] Run: systemctl --version
I1101 10:07:49.984867  322857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-839033
I1101 10:07:50.022214  322857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/functional-839033/id_rsa Username:docker}
I1101 10:07:50.157287  322857 build_images.go:162] Building image from path: /tmp/build.2859613094.tar
I1101 10:07:50.157373  322857 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 10:07:50.167202  322857 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2859613094.tar
I1101 10:07:50.172445  322857 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2859613094.tar: stat -c "%s %y" /var/lib/minikube/build/build.2859613094.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2859613094.tar': No such file or directory
I1101 10:07:50.172480  322857 ssh_runner.go:362] scp /tmp/build.2859613094.tar --> /var/lib/minikube/build/build.2859613094.tar (3072 bytes)
I1101 10:07:50.196819  322857 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2859613094
I1101 10:07:50.207879  322857 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2859613094 -xf /var/lib/minikube/build/build.2859613094.tar
I1101 10:07:50.216608  322857 crio.go:315] Building image: /var/lib/minikube/build/build.2859613094
I1101 10:07:50.216686  322857 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-839033 /var/lib/minikube/build/build.2859613094 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1101 10:07:53.725546  322857 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-839033 /var/lib/minikube/build/build.2859613094 --cgroup-manager=cgroupfs: (3.508833906s)
I1101 10:07:53.725627  322857 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2859613094
I1101 10:07:53.733871  322857 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2859613094.tar
I1101 10:07:53.741612  322857 build_images.go:218] Built localhost/my-image:functional-839033 from /tmp/build.2859613094.tar
I1101 10:07:53.741649  322857 build_images.go:134] succeeded building to: functional-839033
I1101 10:07:53.741654  322857 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.60s)
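Judging by the STEP output above, the build context is equivalent to the sketch below (the actual testdata/build contents may differ slightly); on a crio runtime the build is delegated to podman inside the node, as the stderr shows:

  # Dockerfile in the build context (reconstructed from the build steps):
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /
  out/minikube-linux-arm64 -p functional-839033 image build -t localhost/my-image:functional-839033 testdata/build
  out/minikube-linux-arm64 -p functional-839033 image ls   # the new localhost/my-image tag should appear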

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-839033
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "451.449735ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "70.461538ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "482.150793ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "108.986982ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
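A sketch of consuming the JSON profile listing from a script (assumes jq is installed on the host; the .valid[].Name structure reflects minikube's profile list JSON and should be double-checked against the actual output):

  out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'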

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image rm kicbase/echo-server:functional-839033 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-839033 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-839033 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-839033 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 318218: os: process already finished
helpers_test.go:519: unable to terminate pid 318043: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-839033 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-839033 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-839033 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4887b573-3560-4da3-af3b-65cac4afc27a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4887b573-3560-4da3-af3b-65cac4afc27a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003835625s
I1101 09:57:23.996790  294288 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.37s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-839033 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.71.197 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
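The tunnel flow above, condensed (the service manifest is the test's own testdata; the ingress IP is whatever the jsonpath query prints, 10.97.71.197 in this run):

  # terminal 1: keep the tunnel running so LoadBalancer services get an ingress IP
  out/minikube-linux-arm64 -p functional-839033 tunnel
  # terminal 2: create the service and hit it through the tunnel
  kubectl --context functional-839033 apply -f testdata/testsvc.yaml
  kubectl --context functional-839033 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.97.71.197/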

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-839033 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdany-port2806256234/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761991648492429224" to /tmp/TestFunctionalparallelMountCmdany-port2806256234/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761991648492429224" to /tmp/TestFunctionalparallelMountCmdany-port2806256234/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761991648492429224" to /tmp/TestFunctionalparallelMountCmdany-port2806256234/001/test-1761991648492429224
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.112718ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:07:28.854810  294288 retry.go:31] will retry after 673.348379ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 10:07 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 10:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 10:07 test-1761991648492429224
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh cat /mount-9p/test-1761991648492429224
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-839033 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d190dc43-8116-44c8-ac8e-1279ba46b36b] Pending
helpers_test.go:352: "busybox-mount" [d190dc43-8116-44c8-ac8e-1279ba46b36b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d190dc43-8116-44c8-ac8e-1279ba46b36b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d190dc43-8116-44c8-ac8e-1279ba46b36b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006188392s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-839033 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdany-port2806256234/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.13s)
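
The retry entries above (retry.go) come from polling findmnt over minikube ssh until the 9p mount becomes visible in the guest. A minimal Go sketch of that polling loop, using the binary path, profile and mount point from this log; the fixed 500ms interval is an assumption (the harness itself uses jittered backoff):

// retrymount.go: poll the minikube guest until the 9p mount shows up,
// mirroring the findmnt retry loop in the any-port log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, guestPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// Same probe the test runs: findmnt inside the guest over ssh.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", guestPath))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible in the guest
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s never appeared within %s", guestPath, timeout)
		}
		time.Sleep(500 * time.Millisecond) // assumed interval; the harness retries with jitter
	}
}

func main() {
	if err := waitForMount("functional-839033", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}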

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdspecific-port4190651505/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.913015ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:07:35.966813  294288 retry.go:31] will retry after 338.65511ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdspecific-port4190651505/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 ssh "sudo umount -f /mount-9p": exit status 1 (286.699101ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-839033 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdspecific-port4190651505/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)
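
The forced unmount above exits with status 32 because the mount had already been torn down when the daemon was stopped. A small Go sketch of a cleanup step that tolerates that case by checking the umount output for "not mounted" (binary path, profile and mount point taken from this log):

// forceunmount.go: force-unmount inside the guest and treat an
// already-unmounted path as success, as the cleanup above effectively does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func forceUnmount(profile, guestPath string) error {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "sudo umount -f "+guestPath)
	out, err := cmd.CombinedOutput()
	if err == nil || strings.Contains(string(out), "not mounted") {
		return nil // either unmounted now, or it was never mounted
	}
	return fmt.Errorf("umount %s: %v: %s", guestPath, err, out)
}

func main() {
	if err := forceUnmount("functional-839033", "/mount-9p"); err != nil {
		fmt.Println(err)
	}
}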

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup733645874/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup733645874/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup733645874/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T" /mount1: exit status 1 (608.597959ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:07:37.992562  294288 retry.go:31] will retry after 456.049242ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-839033 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup733645874/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup733645874/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-839033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup733645874/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.30s)
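
The cleanup here relies on a single mount --kill=true call to terminate all three mount daemons at once, after which the per-mount stop steps find nothing left to kill. A short Go sketch of that pattern, followed by a findmnt probe per mount point to confirm nothing is left behind (binary path, profile and mount points taken from this log):

// killmounts.go: kill every mount daemon for a profile, then verify the
// guest no longer sees any of the mount points.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-839033"
	// One shot cleanup, as the VerifyCleanup test does.
	if err := exec.Command("out/minikube-linux-arm64", "mount",
		"-p", profile, "--kill=true").Run(); err != nil {
		fmt.Println("kill mounts:", err)
	}
	// After cleanup, findmnt should fail for each mount point.
	for _, mp := range []string{"/mount1", "/mount2", "/mount3"} {
		err := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", "findmnt -T "+mp).Run()
		if err == nil {
			fmt.Println("still mounted:", mp)
		}
	}
}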

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-839033 service list -o json
functional_test.go:1504: Took "650.247599ms" to run "out/minikube-linux-arm64 -p functional-839033 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)
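
For consumers of this output, the JSON listing can be parsed without hard-coding a schema. A hedged Go sketch that decodes the result into generic maps, assuming only that the top level is a JSON array of objects (no field names are assumed):

// svclist.go: run `service list -o json` and print whatever keys come back.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-839033",
		"service", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var entries []map[string]interface{}
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err) // fails if the output is not a JSON array as assumed
	}
	for _, e := range entries {
		fmt.Println(e)
	}
}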

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-839033
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-839033
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-839033
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (207.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 10:10:03.676997  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m26.095376441s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- rollout status deployment/busybox
E1101 10:11:26.745841  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 kubectl -- rollout status deployment/busybox: (4.022393767s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-2qph9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-dzfcn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-w89sp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-2qph9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-dzfcn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-w89sp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-2qph9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-dzfcn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-w89sp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.05s)
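
The deployment check above boils down to three steps: wait for the rollout, list the pod names via jsonpath, and resolve both an external and an in-cluster name from each pod. A compact Go sketch of that sequence, calling kubectl directly with the context name from this log (the test itself goes through the minikube kubectl wrapper):

// deploycheck.go: rollout wait plus per-pod DNS checks, as in DeployApp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "ha-369457" // context name from the log above
	// Wait for the busybox Deployment to finish rolling out.
	if err := exec.Command("kubectl", "--context", ctx,
		"rollout", "status", "deployment/busybox").Run(); err != nil {
		panic(err)
	}
	// Collect pod names the same way the test does (space-separated jsonpath).
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Resolve an external and an in-cluster name from inside each pod.
		for _, host := range []string{"kubernetes.io", "kubernetes.default.svc.cluster.local"} {
			if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
				"--", "nslookup", host).Run(); err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n", pod, host, err)
			}
		}
	}
}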

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-2qph9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-2qph9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-dzfcn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-dzfcn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-w89sp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 kubectl -- exec busybox-7b57f96db7-w89sp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)
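
The pipeline run inside each pod takes the fifth line of the nslookup output and cuts out the third field to recover the address that host.minikube.internal resolves to, then sends a single ping back to the host. A minimal Go sketch of the same probe, with the context and pod name copied from this log:

// hostping.go: resolve host.minikube.internal from inside a pod and ping it once.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx, pod := "ha-369457", "busybox-7b57f96db7-2qph9"
	// Same extraction pipeline the test runs inside the pod.
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", ip)
	// One ICMP probe back to the host, matching the ping -c 1 step above.
	if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		fmt.Println("ping failed:", err)
	}
}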

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (60.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 node add --alsologtostderr -v 5
E1101 10:12:14.588113  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:14.594527  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:14.605938  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:14.627416  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:14.668816  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:14.750240  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:14.911918  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:15.233565  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:15.874960  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:17.156369  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:19.718041  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:24.839901  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 node add --alsologtostderr -v 5: (59.730967283s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5: (1.07792471s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-369457 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.073071718s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --output json --alsologtostderr -v 5
E1101 10:12:35.081977  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 status --output json --alsologtostderr -v 5: (1.034259702s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp testdata/cp-test.txt ha-369457:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3449021857/001/cp-test_ha-369457.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457:/home/docker/cp-test.txt ha-369457-m02:/home/docker/cp-test_ha-369457_ha-369457-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m02 "sudo cat /home/docker/cp-test_ha-369457_ha-369457-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457:/home/docker/cp-test.txt ha-369457-m03:/home/docker/cp-test_ha-369457_ha-369457-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m03 "sudo cat /home/docker/cp-test_ha-369457_ha-369457-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457:/home/docker/cp-test.txt ha-369457-m04:/home/docker/cp-test_ha-369457_ha-369457-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m04 "sudo cat /home/docker/cp-test_ha-369457_ha-369457-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp testdata/cp-test.txt ha-369457-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3449021857/001/cp-test_ha-369457-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m02:/home/docker/cp-test.txt ha-369457:/home/docker/cp-test_ha-369457-m02_ha-369457.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457 "sudo cat /home/docker/cp-test_ha-369457-m02_ha-369457.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m02:/home/docker/cp-test.txt ha-369457-m03:/home/docker/cp-test_ha-369457-m02_ha-369457-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m03 "sudo cat /home/docker/cp-test_ha-369457-m02_ha-369457-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m02:/home/docker/cp-test.txt ha-369457-m04:/home/docker/cp-test_ha-369457-m02_ha-369457-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m04 "sudo cat /home/docker/cp-test_ha-369457-m02_ha-369457-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp testdata/cp-test.txt ha-369457-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3449021857/001/cp-test_ha-369457-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m03:/home/docker/cp-test.txt ha-369457:/home/docker/cp-test_ha-369457-m03_ha-369457.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457 "sudo cat /home/docker/cp-test_ha-369457-m03_ha-369457.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m03:/home/docker/cp-test.txt ha-369457-m02:/home/docker/cp-test_ha-369457-m03_ha-369457-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m02 "sudo cat /home/docker/cp-test_ha-369457-m03_ha-369457-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m03:/home/docker/cp-test.txt ha-369457-m04:/home/docker/cp-test_ha-369457-m03_ha-369457-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m04 "sudo cat /home/docker/cp-test_ha-369457-m03_ha-369457-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp testdata/cp-test.txt ha-369457-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3449021857/001/cp-test_ha-369457-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m04:/home/docker/cp-test.txt ha-369457:/home/docker/cp-test_ha-369457-m04_ha-369457.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457 "sudo cat /home/docker/cp-test_ha-369457-m04_ha-369457.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m04:/home/docker/cp-test.txt ha-369457-m02:/home/docker/cp-test_ha-369457-m04_ha-369457-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m02 "sudo cat /home/docker/cp-test_ha-369457-m04_ha-369457-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 cp ha-369457-m04:/home/docker/cp-test.txt ha-369457-m03:/home/docker/cp-test_ha-369457-m04_ha-369457-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 ssh -n ha-369457-m03 "sudo cat /home/docker/cp-test_ha-369457-m04_ha-369457-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.37s)
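
Each cp step above is verified the same way: copy the file to a node, cat it back over ssh, and compare with the local copy. A small Go sketch of one such round trip, using the profile, node name and paths from this log:

// cpcheck.go: one copy-and-verify round trip from the CopyFile matrix.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func copyAndVerify(profile, node, local, remote string) error {
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", local, node+":"+remote).Run(); err != nil {
		return fmt.Errorf("cp to %s: %w", node, err)
	}
	got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		return fmt.Errorf("cat on %s: %w", node, err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("content mismatch on %s", node)
	}
	return nil
}

func main() {
	if err := copyAndVerify("ha-369457", "ha-369457-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println(err)
	}
}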

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 node stop m02 --alsologtostderr -v 5
E1101 10:12:55.563319  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 node stop m02 --alsologtostderr -v 5: (12.096131922s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5: exit status 7 (787.25621ms)

                                                
                                                
-- stdout --
	ha-369457
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-369457-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-369457-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-369457-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:13:07.034588  337741 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:13:07.034714  337741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:13:07.034725  337741 out.go:374] Setting ErrFile to fd 2...
	I1101 10:13:07.034730  337741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:13:07.035007  337741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:13:07.035248  337741 out.go:368] Setting JSON to false
	I1101 10:13:07.035282  337741 mustload.go:66] Loading cluster: ha-369457
	I1101 10:13:07.035368  337741 notify.go:221] Checking for updates...
	I1101 10:13:07.035761  337741 config.go:182] Loaded profile config "ha-369457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:13:07.035775  337741 status.go:174] checking status of ha-369457 ...
	I1101 10:13:07.036377  337741 cli_runner.go:164] Run: docker container inspect ha-369457 --format={{.State.Status}}
	I1101 10:13:07.064463  337741 status.go:371] ha-369457 host status = "Running" (err=<nil>)
	I1101 10:13:07.064484  337741 host.go:66] Checking if "ha-369457" exists ...
	I1101 10:13:07.064900  337741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-369457
	I1101 10:13:07.089718  337741 host.go:66] Checking if "ha-369457" exists ...
	I1101 10:13:07.090019  337741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:13:07.090076  337741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-369457
	I1101 10:13:07.110789  337741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/ha-369457/id_rsa Username:docker}
	I1101 10:13:07.231028  337741 ssh_runner.go:195] Run: systemctl --version
	I1101 10:13:07.239698  337741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:13:07.253827  337741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:13:07.313013  337741 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-01 10:13:07.303358152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:13:07.313576  337741 kubeconfig.go:125] found "ha-369457" server: "https://192.168.49.254:8443"
	I1101 10:13:07.313613  337741 api_server.go:166] Checking apiserver status ...
	I1101 10:13:07.313656  337741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:13:07.325491  337741 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1231/cgroup
	I1101 10:13:07.334488  337741 api_server.go:182] apiserver freezer: "8:freezer:/docker/95fc48a1d2d7f025860362d7e0e3f9cacf7c0f9c12e5ad248e3b21534d49e64e/crio/crio-2b77c24290278b9dad378008b09440ba247530a4c1d60ee772e32dde0cba9e7d"
	I1101 10:13:07.334570  337741 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/95fc48a1d2d7f025860362d7e0e3f9cacf7c0f9c12e5ad248e3b21534d49e64e/crio/crio-2b77c24290278b9dad378008b09440ba247530a4c1d60ee772e32dde0cba9e7d/freezer.state
	I1101 10:13:07.343421  337741 api_server.go:204] freezer state: "THAWED"
	I1101 10:13:07.343504  337741 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 10:13:07.351852  337741 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 10:13:07.351879  337741 status.go:463] ha-369457 apiserver status = Running (err=<nil>)
	I1101 10:13:07.351900  337741 status.go:176] ha-369457 status: &{Name:ha-369457 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:13:07.351922  337741 status.go:174] checking status of ha-369457-m02 ...
	I1101 10:13:07.352232  337741 cli_runner.go:164] Run: docker container inspect ha-369457-m02 --format={{.State.Status}}
	I1101 10:13:07.368994  337741 status.go:371] ha-369457-m02 host status = "Stopped" (err=<nil>)
	I1101 10:13:07.369017  337741 status.go:384] host is not running, skipping remaining checks
	I1101 10:13:07.369025  337741 status.go:176] ha-369457-m02 status: &{Name:ha-369457-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:13:07.369052  337741 status.go:174] checking status of ha-369457-m03 ...
	I1101 10:13:07.369372  337741 cli_runner.go:164] Run: docker container inspect ha-369457-m03 --format={{.State.Status}}
	I1101 10:13:07.385773  337741 status.go:371] ha-369457-m03 host status = "Running" (err=<nil>)
	I1101 10:13:07.385801  337741 host.go:66] Checking if "ha-369457-m03" exists ...
	I1101 10:13:07.386096  337741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-369457-m03
	I1101 10:13:07.403027  337741 host.go:66] Checking if "ha-369457-m03" exists ...
	I1101 10:13:07.403390  337741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:13:07.403443  337741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-369457-m03
	I1101 10:13:07.422684  337741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/ha-369457-m03/id_rsa Username:docker}
	I1101 10:13:07.526577  337741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:13:07.540693  337741 kubeconfig.go:125] found "ha-369457" server: "https://192.168.49.254:8443"
	I1101 10:13:07.540719  337741 api_server.go:166] Checking apiserver status ...
	I1101 10:13:07.540762  337741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:13:07.552136  337741 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1195/cgroup
	I1101 10:13:07.565881  337741 api_server.go:182] apiserver freezer: "8:freezer:/docker/fa57e5a2b1b3fad7b7c73ffcf6f27f5057b6de52a27a8f7c2457a9f0078d0596/crio/crio-db346e9733ee76ea4cf444787990d4c867e4b138dda75b30a8d78da3d1c7e158"
	I1101 10:13:07.565951  337741 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fa57e5a2b1b3fad7b7c73ffcf6f27f5057b6de52a27a8f7c2457a9f0078d0596/crio/crio-db346e9733ee76ea4cf444787990d4c867e4b138dda75b30a8d78da3d1c7e158/freezer.state
	I1101 10:13:07.573498  337741 api_server.go:204] freezer state: "THAWED"
	I1101 10:13:07.573528  337741 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 10:13:07.581954  337741 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 10:13:07.581983  337741 status.go:463] ha-369457-m03 apiserver status = Running (err=<nil>)
	I1101 10:13:07.581994  337741 status.go:176] ha-369457-m03 status: &{Name:ha-369457-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:13:07.582012  337741 status.go:174] checking status of ha-369457-m04 ...
	I1101 10:13:07.582317  337741 cli_runner.go:164] Run: docker container inspect ha-369457-m04 --format={{.State.Status}}
	I1101 10:13:07.600523  337741 status.go:371] ha-369457-m04 host status = "Running" (err=<nil>)
	I1101 10:13:07.600549  337741 host.go:66] Checking if "ha-369457-m04" exists ...
	I1101 10:13:07.600851  337741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-369457-m04
	I1101 10:13:07.617279  337741 host.go:66] Checking if "ha-369457-m04" exists ...
	I1101 10:13:07.617615  337741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:13:07.617671  337741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-369457-m04
	I1101 10:13:07.642389  337741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/ha-369457-m04/id_rsa Username:docker}
	I1101 10:13:07.746286  337741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:13:07.759660  337741 status.go:176] ha-369457-m04 status: &{Name:ha-369457-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
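
The stderr trace above shows how the status command decides the apiserver is healthy: it locates the kube-apiserver process, confirms its freezer cgroup is THAWED, then issues a GET against /healthz and expects HTTP 200 with body "ok". A minimal Go sketch of just the HTTP probe, pointed at the VIP from this log; TLS verification is skipped in the sketch for brevity (the real check has access to the cluster certificates):

// healthprobe.go: the /healthz probe visible in the status trace above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}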

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (21.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 node start m02 --alsologtostderr -v 5: (19.677355718s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5: (1.373291043s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.114099889s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 stop --alsologtostderr -v 5
E1101 10:13:36.525057  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 stop --alsologtostderr -v 5: (26.742489797s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 start --wait true --alsologtostderr -v 5
E1101 10:14:58.446350  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:15:03.676787  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 start --wait true --alsologtostderr -v 5: (1m33.759920236s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 node delete m03 --alsologtostderr -v 5: (10.824739921s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 stop --alsologtostderr -v 5: (36.055033262s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5: exit status 7 (120.228ms)

                                                
                                                
-- stdout --
	ha-369457
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-369457-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-369457-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:16:20.333000  349506 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:16:20.333171  349506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:20.333203  349506 out.go:374] Setting ErrFile to fd 2...
	I1101 10:16:20.333224  349506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:20.333495  349506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:16:20.333722  349506 out.go:368] Setting JSON to false
	I1101 10:16:20.333797  349506 mustload.go:66] Loading cluster: ha-369457
	I1101 10:16:20.333859  349506 notify.go:221] Checking for updates...
	I1101 10:16:20.334256  349506 config.go:182] Loaded profile config "ha-369457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:20.334297  349506 status.go:174] checking status of ha-369457 ...
	I1101 10:16:20.334865  349506 cli_runner.go:164] Run: docker container inspect ha-369457 --format={{.State.Status}}
	I1101 10:16:20.354824  349506 status.go:371] ha-369457 host status = "Stopped" (err=<nil>)
	I1101 10:16:20.354845  349506 status.go:384] host is not running, skipping remaining checks
	I1101 10:16:20.354853  349506 status.go:176] ha-369457 status: &{Name:ha-369457 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:16:20.354884  349506 status.go:174] checking status of ha-369457-m02 ...
	I1101 10:16:20.355220  349506 cli_runner.go:164] Run: docker container inspect ha-369457-m02 --format={{.State.Status}}
	I1101 10:16:20.385379  349506 status.go:371] ha-369457-m02 host status = "Stopped" (err=<nil>)
	I1101 10:16:20.385417  349506 status.go:384] host is not running, skipping remaining checks
	I1101 10:16:20.385425  349506 status.go:176] ha-369457-m02 status: &{Name:ha-369457-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:16:20.385444  349506 status.go:174] checking status of ha-369457-m04 ...
	I1101 10:16:20.385753  349506 cli_runner.go:164] Run: docker container inspect ha-369457-m04 --format={{.State.Status}}
	I1101 10:16:20.402992  349506 status.go:371] ha-369457-m04 host status = "Stopped" (err=<nil>)
	I1101 10:16:20.403015  349506 status.go:384] host is not running, skipping remaining checks
	I1101 10:16:20.403042  349506 status.go:176] ha-369457-m04 status: &{Name:ha-369457-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.18s)
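
The non-zero exit above is expected after a full stop; the harness keys off exit status 7 from the status command rather than parsing the text output. A short Go sketch that captures the exit code the same way, with the binary path and profile from this log:

// statuscode.go: run `minikube status` and surface its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-369457", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The fully stopped cluster above reported exit status 7.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}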

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (74.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 10:17:14.587762  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m13.366014443s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (74.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (82.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 node add --control-plane --alsologtostderr -v 5
E1101 10:17:42.288574  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 node add --control-plane --alsologtostderr -v 5: (1m21.59134676s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-369457 status --alsologtostderr -v 5: (1.05936107s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.094892646s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

                                                
                                    
x
+
TestJSONOutput/start/Command (81.13s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-827677 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1101 10:20:03.681078  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-827677 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.127117195s)
--- PASS: TestJSONOutput/start/Command (81.13s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.88s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-827677 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-827677 --output=json --user=testUser: (5.87979057s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-584329 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-584329 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.726456ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0896805b-033b-43cb-925f-f69741d71966","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-584329] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"acfecb95-07be-49e2-9232-6f4228c58463","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21832"}}
	{"specversion":"1.0","id":"5dc8dccc-d52a-4062-b262-b7e8f4a60e75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"20e01465-63ad-4613-a05b-7e7b8b9d65b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig"}}
	{"specversion":"1.0","id":"743ba886-ed11-45fc-9333-b5f7dc2e2296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube"}}
	{"specversion":"1.0","id":"1438db34-d148-4b17-bb69-93d9a5b9c7a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"df794aff-7e43-4c37-8813-62ab374d2956","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cb5f779f-42f6-4989-9f7c-0eff6142bae0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-584329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-584329
--- PASS: TestErrorJSONOutput (0.25s)
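The error event above uses the same CloudEvents envelope as the step and info events. A minimal sketch of extracting it on the command line, assuming jq is installed (illustrative, not part of the test):

  # the command exits 56; the interesting part is the io.k8s.sigs.minikube.error event on stdout
  out/minikube-linux-arm64 start -p json-output-error-584329 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'
  # prints: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/arm64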

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.7s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-569132 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-569132 --network=: (36.361105983s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-569132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-569132
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-569132: (2.310845549s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.70s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.47s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-554062 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-554062 --network=bridge: (35.330388621s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-554062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-554062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-554062: (2.119868317s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.47s)

                                                
                                    
TestKicExistingNetwork (39.9s)
=== RUN   TestKicExistingNetwork
I1101 10:21:59.948618  294288 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 10:21:59.964071  294288 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 10:21:59.964983  294288 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 10:21:59.965031  294288 cli_runner.go:164] Run: docker network inspect existing-network
W1101 10:21:59.981612  294288 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 10:21:59.981644  294288 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1101 10:21:59.981665  294288 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1101 10:21:59.981766  294288 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 10:22:00.015683  294288 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e2665991a3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:25:1a:f9:12:ec} reservation:<nil>}
I1101 10:22:00.016194  294288 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40002cff10}
I1101 10:22:00.016219  294288 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 10:22:00.016282  294288 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 10:22:00.319365  294288 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-406264 --network=existing-network
E1101 10:22:14.587041  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-406264 --network=existing-network: (37.333430031s)
helpers_test.go:175: Cleaning up "existing-network-406264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-406264
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-406264: (2.150165912s)
I1101 10:22:39.834748  294288 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (39.90s)
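The pre-created network step logged above can be reproduced by hand. A simplified sketch using the flags from this run (192.168.58.0/24 was the free range picked here and may differ on another host):

  # create the bridge network minikube is expected to reuse, then attach a profile to it
  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  out/minikube-linux-arm64 start -p existing-network-406264 --network=existing-network
  docker network ls --format '{{.Name}}'        # existing-network should still be listed, not a new one
  out/minikube-linux-arm64 delete -p existing-network-406264
  docker network rm existing-network            # clean up the manually created network if it is still present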

                                                
                                    
TestKicCustomSubnet (38.84s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-562960 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-562960 --subnet=192.168.60.0/24: (36.562025463s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-562960 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-562960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-562960
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-562960: (2.252798139s)
--- PASS: TestKicCustomSubnet (38.84s)
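A quick manual check that the requested subnet was honoured, using the same inspect format string as the test:

  out/minikube-linux-arm64 start -p custom-subnet-562960 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-562960 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24
  out/minikube-linux-arm64 delete -p custom-subnet-562960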

                                                
                                    
TestKicStaticIP (36.4s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-660040 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-660040 --static-ip=192.168.200.200: (33.997176176s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-660040 ip
helpers_test.go:175: Cleaning up "static-ip-660040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-660040
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-660040: (2.232582592s)
--- PASS: TestKicStaticIP (36.40s)
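The static-IP variant is analogous: the IP reported for the node should match the flag exactly.

  out/minikube-linux-arm64 start -p static-ip-660040 --static-ip=192.168.200.200
  out/minikube-linux-arm64 -p static-ip-660040 ip     # expect 192.168.200.200
  out/minikube-linux-arm64 delete -p static-ip-660040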

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (77.75s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-824405 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-824405 --driver=docker  --container-runtime=crio: (36.353132107s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-826940 --driver=docker  --container-runtime=crio
E1101 10:25:03.683707  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-826940 --driver=docker  --container-runtime=crio: (35.697218378s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-824405
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-826940
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-826940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-826940
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-826940: (2.169149055s)
helpers_test.go:175: Cleaning up "first-824405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-824405
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-824405: (2.081547177s)
--- PASS: TestMinikubeProfile (77.75s)
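The profile list -ojson output consumed above is also convenient for ad-hoc checks. A sketch, assuming jq is available and assuming the JSON keeps its usual top-level "valid"/"invalid" arrays (an assumption, not something this report confirms):

  out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'   # expect first-824405 and second-826940
  out/minikube-linux-arm64 profile first-824405                          # switch the active profile back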

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.69s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-464998 --memory=3072 --mount-string /tmp/TestMountStartserial2176602890/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-464998 --memory=3072 --mount-string /tmp/TestMountStartserial2176602890/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.690315338s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.69s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-464998 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
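The start/verify pair above amounts to mounting a host directory into the node and listing it over SSH. A trimmed-down sketch with an illustrative host path (the real test uses a per-run temp directory):

  # start a no-Kubernetes profile with a host directory mounted at /minikube-host
  out/minikube-linux-arm64 start -p mount-start-1-464998 --memory=3072 \
    --mount-string /tmp/host-dir:/minikube-host --mount-port 46464 --no-kubernetes \
    --driver=docker --container-runtime=crio
  # the directory contents should be visible from inside the node
  out/minikube-linux-arm64 -p mount-start-1-464998 ssh -- ls /minikube-host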

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.08s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-467394 --memory=3072 --mount-string /tmp/TestMountStartserial2176602890/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-467394 --memory=3072 --mount-string /tmp/TestMountStartserial2176602890/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.084543811s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.08s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-467394 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-464998 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-464998 --alsologtostderr -v=5: (1.707449925s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-467394 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-467394
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-467394: (1.295765084s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.75s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-467394
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-467394: (7.747619077s)
--- PASS: TestMountStart/serial/RestartStopped (8.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.34s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-467394 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (140.13s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203767 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 10:27:14.588040  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:28:06.747343  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-203767 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m19.587358643s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.13s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.28s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-203767 -- rollout status deployment/busybox: (3.338132559s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-5wssc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-jnxg8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-5wssc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-jnxg8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-5wssc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-jnxg8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.28s)
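The deployment check above reduces to waiting for the rollout and resolving cluster DNS from every busybox replica. A sketch of the same loop with plain kubectl, assuming the busybox pods are the only pods in the default namespace (as in this run):

  kubectl --context multinode-203767 rollout status deployment/busybox
  for pod in $(kubectl --context multinode-203767 get pods -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context multinode-203767 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done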

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.93s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-5wssc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-5wssc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-jnxg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203767 -- exec busybox-7b57f96db7-jnxg8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
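Host reachability is checked through the in-cluster name host.minikube.internal. The extraction pipeline below is the one the test runs; the awk/cut offsets assume busybox's nslookup output layout, and the pod name is one taken from this run:

  POD=busybox-7b57f96db7-5wssc
  HOST_IP=$(kubectl --context multinode-203767 exec "$POD" -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context multinode-203767 exec "$POD" -- sh -c "ping -c 1 $HOST_IP"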

                                                
                                    
TestMultiNode/serial/AddNode (60.48s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-203767 -v=5 --alsologtostderr
E1101 10:28:37.650176  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-203767 -v=5 --alsologtostderr: (59.768017163s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (60.48s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-203767 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.77s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.77s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.5s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp testdata/cp-test.txt multinode-203767:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile626383225/001/cp-test_multinode-203767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767:/home/docker/cp-test.txt multinode-203767-m02:/home/docker/cp-test_multinode-203767_multinode-203767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m02 "sudo cat /home/docker/cp-test_multinode-203767_multinode-203767-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767:/home/docker/cp-test.txt multinode-203767-m03:/home/docker/cp-test_multinode-203767_multinode-203767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m03 "sudo cat /home/docker/cp-test_multinode-203767_multinode-203767-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp testdata/cp-test.txt multinode-203767-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile626383225/001/cp-test_multinode-203767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767-m02:/home/docker/cp-test.txt multinode-203767:/home/docker/cp-test_multinode-203767-m02_multinode-203767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767 "sudo cat /home/docker/cp-test_multinode-203767-m02_multinode-203767.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767-m02:/home/docker/cp-test.txt multinode-203767-m03:/home/docker/cp-test_multinode-203767-m02_multinode-203767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m03 "sudo cat /home/docker/cp-test_multinode-203767-m02_multinode-203767-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp testdata/cp-test.txt multinode-203767-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile626383225/001/cp-test_multinode-203767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767-m03:/home/docker/cp-test.txt multinode-203767:/home/docker/cp-test_multinode-203767-m03_multinode-203767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767 "sudo cat /home/docker/cp-test_multinode-203767-m03_multinode-203767.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767-m03:/home/docker/cp-test.txt multinode-203767-m02:/home/docker/cp-test_multinode-203767-m03_multinode-203767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m02 "sudo cat /home/docker/cp-test_multinode-203767-m03_multinode-203767-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.50s)
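Every cp above is verified by reading the file back over SSH. The whole matrix is the same round trip repeated per node pair; one pair is sketched here with the paths from this run:

  # host -> node, then node -> node, each followed by a cat on the destination
  out/minikube-linux-arm64 -p multinode-203767 cp testdata/cp-test.txt multinode-203767:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-arm64 -p multinode-203767 cp multinode-203767:/home/docker/cp-test.txt multinode-203767-m02:/home/docker/cp-test_multinode-203767_multinode-203767-m02.txt
  out/minikube-linux-arm64 -p multinode-203767 ssh -n multinode-203767-m02 "sudo cat /home/docker/cp-test_multinode-203767_multinode-203767-m02.txt"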

                                                
                                    
TestMultiNode/serial/StopNode (2.43s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-203767 node stop m03: (1.337703495s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-203767 status: exit status 7 (542.2049ms)

                                                
                                                
-- stdout --
	multinode-203767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-203767-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-203767-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-203767 status --alsologtostderr: exit status 7 (551.440802ms)

                                                
                                                
-- stdout --
	multinode-203767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-203767-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-203767-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:29:27.648697  399891 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:29:27.648891  399891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:29:27.648905  399891 out.go:374] Setting ErrFile to fd 2...
	I1101 10:29:27.648911  399891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:29:27.649263  399891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:29:27.649493  399891 out.go:368] Setting JSON to false
	I1101 10:29:27.649537  399891 mustload.go:66] Loading cluster: multinode-203767
	I1101 10:29:27.649591  399891 notify.go:221] Checking for updates...
	I1101 10:29:27.650567  399891 config.go:182] Loaded profile config "multinode-203767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:27.650587  399891 status.go:174] checking status of multinode-203767 ...
	I1101 10:29:27.651240  399891 cli_runner.go:164] Run: docker container inspect multinode-203767 --format={{.State.Status}}
	I1101 10:29:27.670559  399891 status.go:371] multinode-203767 host status = "Running" (err=<nil>)
	I1101 10:29:27.670585  399891 host.go:66] Checking if "multinode-203767" exists ...
	I1101 10:29:27.671002  399891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203767
	I1101 10:29:27.699743  399891 host.go:66] Checking if "multinode-203767" exists ...
	I1101 10:29:27.700046  399891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:29:27.700105  399891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203767
	I1101 10:29:27.719363  399891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/multinode-203767/id_rsa Username:docker}
	I1101 10:29:27.826478  399891 ssh_runner.go:195] Run: systemctl --version
	I1101 10:29:27.833103  399891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:29:27.846262  399891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:29:27.904747  399891 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:29:27.894450177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:29:27.905424  399891 kubeconfig.go:125] found "multinode-203767" server: "https://192.168.67.2:8443"
	I1101 10:29:27.905462  399891 api_server.go:166] Checking apiserver status ...
	I1101 10:29:27.905516  399891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:29:27.917948  399891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup
	I1101 10:29:27.926519  399891 api_server.go:182] apiserver freezer: "8:freezer:/docker/f8b6531cdb934d0043fcbaf77521bc789db08e7a8c4c16d3fa32a67e7984320f/crio/crio-5985ca082cb9d55e9861d81f3dcb157ebead02bef9831bd172d1efcb2dfceac7"
	I1101 10:29:27.926592  399891 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f8b6531cdb934d0043fcbaf77521bc789db08e7a8c4c16d3fa32a67e7984320f/crio/crio-5985ca082cb9d55e9861d81f3dcb157ebead02bef9831bd172d1efcb2dfceac7/freezer.state
	I1101 10:29:27.934342  399891 api_server.go:204] freezer state: "THAWED"
	I1101 10:29:27.934379  399891 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 10:29:27.942604  399891 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 10:29:27.942632  399891 status.go:463] multinode-203767 apiserver status = Running (err=<nil>)
	I1101 10:29:27.942644  399891 status.go:176] multinode-203767 status: &{Name:multinode-203767 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:29:27.942660  399891 status.go:174] checking status of multinode-203767-m02 ...
	I1101 10:29:27.942971  399891 cli_runner.go:164] Run: docker container inspect multinode-203767-m02 --format={{.State.Status}}
	I1101 10:29:27.960446  399891 status.go:371] multinode-203767-m02 host status = "Running" (err=<nil>)
	I1101 10:29:27.960475  399891 host.go:66] Checking if "multinode-203767-m02" exists ...
	I1101 10:29:27.960791  399891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203767-m02
	I1101 10:29:27.978926  399891 host.go:66] Checking if "multinode-203767-m02" exists ...
	I1101 10:29:27.979393  399891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:29:27.979460  399891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203767-m02
	I1101 10:29:27.997471  399891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21832-292445/.minikube/machines/multinode-203767-m02/id_rsa Username:docker}
	I1101 10:29:28.107547  399891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:29:28.121142  399891 status.go:176] multinode-203767-m02 status: &{Name:multinode-203767-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:29:28.121184  399891 status.go:174] checking status of multinode-203767-m03 ...
	I1101 10:29:28.121488  399891 cli_runner.go:164] Run: docker container inspect multinode-203767-m03 --format={{.State.Status}}
	I1101 10:29:28.138553  399891 status.go:371] multinode-203767-m03 host status = "Stopped" (err=<nil>)
	I1101 10:29:28.138576  399891 status.go:384] host is not running, skipping remaining checks
	I1101 10:29:28.138582  399891 status.go:176] multinode-203767-m03 status: &{Name:multinode-203767-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
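As the stderr trace shows, status decides whether the API server is healthy by locating the kube-apiserver process over SSH, reading its freezer cgroup state, and probing /healthz. A rough manual equivalent (a sketch only; the real check authenticates with the profile's client certificates):

  # the apiserver process should exist on the control-plane node
  out/minikube-linux-arm64 -p multinode-203767 ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
  # /healthz should answer ok while the control plane is up
  kubectl --context multinode-203767 get --raw /healthz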

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.23s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-203767 node start m03 -v=5 --alsologtostderr: (7.421570072s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.23s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.53s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-203767
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-203767
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-203767: (25.199518578s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203767 --wait=true -v=5 --alsologtostderr
E1101 10:30:03.677221  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-203767 --wait=true -v=5 --alsologtostderr: (51.20392041s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-203767
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.53s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.71s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-203767 node delete m03: (5.002568312s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.71s)
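The Ready check after the deletion is a plain go-template over node conditions and can be reused any time to confirm only the two remaining nodes report Ready:

  kubectl --context multinode-203767 get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
  # expect exactly two "True" lines once m03 has been deleted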

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.5s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-203767 stop: (24.312471051s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-203767 status: exit status 7 (101.050068ms)

                                                
                                                
-- stdout --
	multinode-203767
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-203767-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-203767 status --alsologtostderr: exit status 7 (85.848951ms)

                                                
                                                
-- stdout --
	multinode-203767
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-203767-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:31:23.073337  407679 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:31:23.073447  407679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:31:23.073458  407679 out.go:374] Setting ErrFile to fd 2...
	I1101 10:31:23.073463  407679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:31:23.073710  407679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:31:23.073907  407679 out.go:368] Setting JSON to false
	I1101 10:31:23.073949  407679 mustload.go:66] Loading cluster: multinode-203767
	I1101 10:31:23.074010  407679 notify.go:221] Checking for updates...
	I1101 10:31:23.075321  407679 config.go:182] Loaded profile config "multinode-203767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:31:23.075348  407679 status.go:174] checking status of multinode-203767 ...
	I1101 10:31:23.076097  407679 cli_runner.go:164] Run: docker container inspect multinode-203767 --format={{.State.Status}}
	I1101 10:31:23.094712  407679 status.go:371] multinode-203767 host status = "Stopped" (err=<nil>)
	I1101 10:31:23.094737  407679 status.go:384] host is not running, skipping remaining checks
	I1101 10:31:23.094744  407679 status.go:176] multinode-203767 status: &{Name:multinode-203767 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:31:23.094773  407679 status.go:174] checking status of multinode-203767-m02 ...
	I1101 10:31:23.095075  407679 cli_runner.go:164] Run: docker container inspect multinode-203767-m02 --format={{.State.Status}}
	I1101 10:31:23.114573  407679 status.go:371] multinode-203767-m02 host status = "Stopped" (err=<nil>)
	I1101 10:31:23.114597  407679 status.go:384] host is not running, skipping remaining checks
	I1101 10:31:23.114604  407679 status.go:176] multinode-203767-m02 status: &{Name:multinode-203767-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.50s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.87s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203767 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-203767 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.175194464s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203767 status --alsologtostderr
E1101 10:32:14.587358  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.87s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.58s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-203767
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203767-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-203767-m02 --driver=docker  --container-runtime=crio: exit status 14 (109.327622ms)

                                                
                                                
-- stdout --
	* [multinode-203767-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-203767-m02' is duplicated with machine name 'multinode-203767-m02' in profile 'multinode-203767'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203767-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-203767-m03 --driver=docker  --container-runtime=crio: (34.864105604s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-203767
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-203767: exit status 80 (383.592955ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-203767 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-203767-m03 already exists in multinode-203767-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-203767-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-203767-m03: (2.119211705s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.58s)
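The conflict check can be replayed by hand; a minimal sketch using the same profile names and flags as the run above (the "|| echo" guard is an addition for interactive use):

	# a profile name that collides with an existing machine name is rejected (exit status 14)
	out/minikube-linux-arm64 node list -p multinode-203767
	out/minikube-linux-arm64 start -p multinode-203767-m02 --driver=docker --container-runtime=crio || echo "rejected as expected"
	# a non-conflicting name starts normally; clean it up afterwards
	out/minikube-linux-arm64 start -p multinode-203767-m03 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 delete -p multinode-203767-m03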

                                                
                                    
x
+
TestPreload (128.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-972600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-972600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.81792915s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-972600 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-972600 image pull gcr.io/k8s-minikube/busybox: (2.321624925s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-972600
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-972600: (5.895656133s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-972600 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-972600 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.858161736s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-972600 image list
helpers_test.go:175: Cleaning up "test-preload-972600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-972600
E1101 10:35:03.677030  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-972600: (2.480383059s)
--- PASS: TestPreload (128.62s)
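The preload workflow the test drives can be replayed manually; a minimal sketch with a hypothetical profile name (test-preload), otherwise using the flags from the run above:

	# start without a preload tarball, add an image, then check that it survives a stop/start cycle
	out/minikube-linux-arm64 start -p test-preload --memory=3072 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-arm64 -p test-preload image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p test-preload
	out/minikube-linux-arm64 start -p test-preload --memory=3072 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p test-preload image list    # busybox should still be listed
	out/minikube-linux-arm64 delete -p test-preload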

                                                
                                    
x
+
TestScheduledStopUnix (113.56s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-835541 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-835541 --memory=3072 --driver=docker  --container-runtime=crio: (36.384000951s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-835541 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-835541 -n scheduled-stop-835541
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-835541 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 10:35:42.481393  294288 retry.go:31] will retry after 75.803µs: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.482591  294288 retry.go:31] will retry after 112.825µs: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.483734  294288 retry.go:31] will retry after 305.132µs: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.484887  294288 retry.go:31] will retry after 223.241µs: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.486074  294288 retry.go:31] will retry after 466.951µs: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.487270  294288 retry.go:31] will retry after 604.963µs: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.488411  294288 retry.go:31] will retry after 1.484223ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.490603  294288 retry.go:31] will retry after 1.701987ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.492810  294288 retry.go:31] will retry after 2.345624ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.496028  294288 retry.go:31] will retry after 3.86738ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.500215  294288 retry.go:31] will retry after 6.263864ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.507478  294288 retry.go:31] will retry after 9.112225ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.517732  294288 retry.go:31] will retry after 13.007565ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.532095  294288 retry.go:31] will retry after 14.753454ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
I1101 10:35:42.547273  294288 retry.go:31] will retry after 31.546287ms: open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/scheduled-stop-835541/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-835541 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-835541 -n scheduled-stop-835541
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-835541
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-835541 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-835541
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-835541: exit status 7 (76.443436ms)

                                                
                                                
-- stdout --
	scheduled-stop-835541
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-835541 -n scheduled-stop-835541
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-835541 -n scheduled-stop-835541: exit status 7 (71.485617ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-835541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-835541
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-835541: (5.510937516s)
--- PASS: TestScheduledStopUnix (113.56s)
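A minimal sketch of the scheduled-stop flow exercised above, with a hypothetical profile name (demo); the flags are the ones shown in the log, and the sleep is an addition for interactive use:

	# schedule a stop five minutes out, inspect the countdown, then cancel it
	out/minikube-linux-arm64 stop -p demo --schedule 5m
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p demo
	out/minikube-linux-arm64 stop -p demo --cancel-scheduled
	# schedule a short stop and let it fire; status then exits 7 with host/kubelet Stopped
	out/minikube-linux-arm64 stop -p demo --schedule 15s
	sleep 30
	out/minikube-linux-arm64 status -p demo || true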

                                                
                                    
x
+
TestInsufficientStorage (13.62s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-154201 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-154201 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.02566464s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fea6989c-7baf-4551-abf1-f924e1beff44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-154201] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b3e82fe-8b5f-4936-af01-a4bab58f4795","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21832"}}
	{"specversion":"1.0","id":"91574b4b-921f-442b-8760-9c4a7721dac9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f4e66d5a-cec5-4e9a-bb9c-1b0518da0341","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig"}}
	{"specversion":"1.0","id":"75ec91c3-3dd6-4b9d-bdbc-badf95c4b4da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube"}}
	{"specversion":"1.0","id":"73aff9d2-3edf-4644-aa42-46d42f3ffb81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9083dba0-9242-4471-9776-b87af0657a73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ee8e6c0e-a668-4688-9048-05dd54cc60fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"044f2372-1473-4142-a0b5-fe8178851923","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"57b1d68f-9fd7-48ff-a9ed-94255cdd7674","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c20e2a5c-f7f6-45df-9f2a-f0e6a642756d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"198cd6fb-0c14-4432-ae43-998968432b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-154201\" primary control-plane node in \"insufficient-storage-154201\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1d88612-fe64-44ce-bf20-c0452cd66981","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"55e947dd-f2c9-4448-baa4-ef6705cf0cbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa965cfe-1b0b-4f99-884e-191f19d7563a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-154201 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-154201 --output=json --layout=cluster: exit status 7 (319.319842ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-154201","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-154201","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 10:37:10.415949  424097 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-154201" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-154201 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-154201 --output=json --layout=cluster: exit status 7 (301.096148ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-154201","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-154201","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 10:37:10.717863  424164 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-154201" does not appear in /home/jenkins/minikube-integration/21832-292445/kubeconfig
	E1101 10:37:10.727733  424164 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/insufficient-storage-154201/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-154201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-154201
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-154201: (1.968813588s)
--- PASS: TestInsufficientStorage (13.62s)
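The cluster-layout status captured above is machine-readable; a minimal sketch of pulling the failure reason out of it, assuming jq is available (jq is not part of this test run):

	# exit status 7 signals a problem; the JSON carries the reason (507 / InsufficientStorage here)
	out/minikube-linux-arm64 status -p insufficient-storage-154201 --output=json --layout=cluster > status.json || true
	jq -r '.StatusName + ": " + .StatusDetail' status.json
	jq -r '.Nodes[].Components | to_entries[] | "\(.key)=\(.value.StatusName)"' status.json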

                                                
                                    
x
+
TestRunningBinaryUpgrade (54.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1328087147 start -p running-upgrade-700635 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1328087147 start -p running-upgrade-700635 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.249990438s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-700635 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-700635 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.071887104s)
helpers_test.go:175: Cleaning up "running-upgrade-700635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-700635
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-700635: (2.028892052s)
--- PASS: TestRunningBinaryUpgrade (54.05s)

                                                
                                    
x
+
TestKubernetesUpgrade (367.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.144329516s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-946953
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-946953: (1.656761742s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-946953 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-946953 status --format={{.Host}}: exit status 7 (140.748695ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m41.344404571s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-946953 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (146.656885ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-946953] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-946953
	    minikube start -p kubernetes-upgrade-946953 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9469532 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-946953 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-946953 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.986907623s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-946953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-946953
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-946953: (2.486006092s)
--- PASS: TestKubernetesUpgrade (367.04s)
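A minimal sketch of the same upgrade path with a hypothetical profile name (k8s-upgrade); the version pair and flags are those from the run above:

	# bring up an old cluster, stop it, then restart it on a newer Kubernetes
	out/minikube-linux-arm64 start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p k8s-upgrade
	out/minikube-linux-arm64 start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
	# downgrading in place is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED); delete and recreate instead
	out/minikube-linux-arm64 start -p k8s-upgrade --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio || true
	out/minikube-linux-arm64 delete -p k8s-upgrade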

                                                
                                    
x
+
TestMissingContainerUpgrade (122.89s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3447831737 start -p missing-upgrade-941524 --memory=3072 --driver=docker  --container-runtime=crio
E1101 10:37:14.587543  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3447831737 start -p missing-upgrade-941524 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.689071762s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-941524
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-941524
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-941524 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-941524 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.501284081s)
helpers_test.go:175: Cleaning up "missing-upgrade-941524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-941524
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-941524: (2.307254582s)
--- PASS: TestMissingContainerUpgrade (122.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-276658 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-276658 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (104.467745ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-276658] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
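As the error text says, --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the working invocations, with a hypothetical profile name (nok8s):

	# rejected: pinning a Kubernetes version makes no sense without Kubernetes (exit status 14)
	out/minikube-linux-arm64 start -p nok8s --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio || true
	# if kubernetes-version was set globally, clear it, then start without Kubernetes
	out/minikube-linux-arm64 config unset kubernetes-version
	out/minikube-linux-arm64 start -p nok8s --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio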

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (51.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-276658 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-276658 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (51.271741413s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-276658 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (51.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-276658 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-276658 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.470068917s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-276658 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-276658 status -o json: exit status 2 (394.715731ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-276658","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-276658
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-276658: (2.187719239s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-276658 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-276658 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.232308177s)
--- PASS: TestNoKubernetes/serial/Start (10.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-276658 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-276658 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.925077ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
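The check relies on systemctl's exit status over minikube ssh; a minimal sketch, assuming the NoKubernetes-276658 profile from this run is still up:

	# "is-active --quiet" exits 0 only when the unit is running, so a non-zero exit is the expected outcome here
	if out/minikube-linux-arm64 ssh -p NoKubernetes-276658 "sudo systemctl is-active --quiet service kubelet"; then
	  echo "kubelet is unexpectedly running"
	else
	  echo "kubelet is not running, as expected"
	fi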

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-276658
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-276658: (1.306060572s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-276658 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-276658 --driver=docker  --container-runtime=crio: (7.96874895s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-276658 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-276658 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.458116ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (60.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3365852602 start -p stopped-upgrade-124684 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3365852602 start -p stopped-upgrade-124684 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.852318026s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3365852602 -p stopped-upgrade-124684 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3365852602 -p stopped-upgrade-124684 stop: (1.223357048s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-124684 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 10:40:03.676879  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-124684 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.826778635s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (60.90s)
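A minimal sketch of the stopped-binary upgrade path driven above; the /tmp path to the extracted v1.32.0 binary is specific to this run, and the shortened profile name is hypothetical:

	# create and stop a cluster with the old release, then restart it with the binary under test
	/tmp/minikube-v1.32.0.3365852602 start -p stopped-upgrade --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.32.0.3365852602 -p stopped-upgrade stop
	out/minikube-linux-arm64 start -p stopped-upgrade --memory=3072 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 logs -p stopped-upgrade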

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-124684
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-124684: (1.200668095s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

                                                
                                    
x
+
TestPause/serial/Start (81.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-524446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1101 10:42:14.587883  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-524446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.799153792s)
--- PASS: TestPause/serial/Start (81.80s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (26.93s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-524446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-524446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.906927398s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-883951 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-883951 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (312.513679ms)

                                                
                                                
-- stdout --
	* [false-883951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:43:53.188583  461016 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:53.193498  461016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:53.193536  461016 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:53.193559  461016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:53.193888  461016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-292445/.minikube/bin
	I1101 10:43:53.194381  461016 out.go:368] Setting JSON to false
	I1101 10:43:53.195417  461016 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8785,"bootTime":1761985048,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1101 10:43:53.195517  461016 start.go:143] virtualization:  
	I1101 10:43:53.199374  461016 out.go:179] * [false-883951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:43:53.202370  461016 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:43:53.202579  461016 notify.go:221] Checking for updates...
	I1101 10:43:53.208380  461016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:53.211248  461016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-292445/kubeconfig
	I1101 10:43:53.214104  461016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-292445/.minikube
	I1101 10:43:53.217358  461016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:43:53.220150  461016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:43:53.223421  461016 config.go:182] Loaded profile config "kubernetes-upgrade-946953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:53.223530  461016 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:53.270012  461016 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:43:53.270130  461016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:53.387505  461016 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:43:53.377407629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:43:53.387607  461016 docker.go:319] overlay module found
	I1101 10:43:53.390695  461016 out.go:179] * Using the docker driver based on user configuration
	I1101 10:43:53.393380  461016 start.go:309] selected driver: docker
	I1101 10:43:53.393394  461016 start.go:930] validating driver "docker" against <nil>
	I1101 10:43:53.393408  461016 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:43:53.396871  461016 out.go:203] 
	W1101 10:43:53.399560  461016 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 10:43:53.402369  461016 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-883951 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-883951" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:39:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-946953
contexts:
- context:
    cluster: kubernetes-upgrade-946953
    user: kubernetes-upgrade-946953
  name: kubernetes-upgrade-946953
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-946953
  user:
    client-certificate: /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kubernetes-upgrade-946953/client.crt
    client-key: /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kubernetes-upgrade-946953/client.key
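Note: the only context left in this kubeconfig points at the kubernetes-upgrade-946953 profile, which is why every lookup against "false-883951" above fails. A minimal sketch for inspecting it by hand (assumes kubectl is on PATH and the certificate paths above still exist):
	# list the contexts kubectl knows about and select the surviving one
	kubectl config get-contexts
	kubectl config use-context kubernetes-upgrade-946953
	# confirm the API server at https://192.168.76.2:8443 answers
	kubectl get nodes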

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-883951

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-883951"

                                                
                                                
----------------------- debugLogs end: false-883951 [took: 5.282920373s] --------------------------------
helpers_test.go:175: Cleaning up "false-883951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-883951
--- PASS: TestNetworkPlugins/group/false (5.92s)
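Note: the repeated "Profile ... not found" and "context ... does not exist" lines above are consistent with the false-883951 cluster never having been brought up before debug-log collection ran; the test itself passes. A hedged sketch of the same pre-checks (assumes the minikube binary built by this job):
	# confirm whether the profile exists before trying to gather its logs
	out/minikube-linux-arm64 profile list
	# confirm whether kubectl has a matching context
	kubectl config get-contexts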

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (62.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.310634642s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-245622 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [752fc038-610b-4c69-a258-06116d49c5d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [752fc038-610b-4c69-a258-06116d49c5d3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00629349s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-245622 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)
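Note: the DeployApp step applies testdata/busybox.yaml, waits for the pod labelled integration-test=busybox to become Ready, then reads the container's open-file limit. A rough manual equivalent (a sketch; kubectl wait approximates the test's own 8m polling loop):
	kubectl --context old-k8s-version-245622 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-245622 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-245622 exec busybox -- /bin/sh -c "ulimit -n"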

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-245622 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-245622 --alsologtostderr -v=3: (12.022718054s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-245622 -n old-k8s-version-245622
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-245622 -n old-k8s-version-245622: exit status 7 (72.167403ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-245622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
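Note: the "Non-zero exit ... exit status 7" above is not a failure; per the stdout it corresponds to the Stopped host state, which is exactly what this step expects. A hedged shell sketch of the same check-then-enable flow:
	# exit status 7 here reflects the Stopped host reported on stdout, so ignore it
	out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-245622 || true
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-245622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4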

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (47.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1101 10:47:14.587239  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-245622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.613225437s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-245622 -n old-k8s-version-245622
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dwp8b" [587849a0-79dc-4cc6-93f8-5c57c64fc5f2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008343075s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dwp8b" [587849a0-79dc-4cc6-93f8-5c57c64fc5f2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003684439s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-245622 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-245622 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
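Note: VerifyKubernetesImages lists the profile's images as JSON and flags anything outside the expected minikube set (the kindnetd and busybox tags above). A hedged sketch for inspecting the same output by hand (assumes jq is available and that each JSON entry carries a repoTags array):
	out/minikube-linux-arm64 -p old-k8s-version-245622 image list --format=json | jq -r '.[].repoTags[]' | sort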

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.835835406s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.84s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (81.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.564411173s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-014050 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [98a75dc0-f396-4705-a6f4-5d99adc472af] Pending
helpers_test.go:352: "busybox" [98a75dc0-f396-4705-a6f4-5d99adc472af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [98a75dc0-f396-4705-a6f4-5d99adc472af] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003071471s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-014050 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-014050 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-014050 --alsologtostderr -v=3: (12.047293573s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050: exit status 7 (76.084356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-014050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:50:03.677272  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/addons-714840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-014050 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.624096134s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-014050 -n default-k8s-diff-port-014050
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-499088 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d07dd95a-7eea-459b-8c02-1476a2c71627] Pending
helpers_test.go:352: "busybox" [d07dd95a-7eea-459b-8c02-1476a2c71627] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d07dd95a-7eea-459b-8c02-1476a2c71627] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003472663s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-499088 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-499088 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-499088 --alsologtostderr -v=3: (12.042650545s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-499088 -n embed-certs-499088
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-499088 -n embed-certs-499088: exit status 7 (71.019396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-499088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (47.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-499088 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.333568769s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-499088 -n embed-certs-499088
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fj5c6" [1a59c4d2-6c8a-4e52-8dd0-0fe55b16e5a8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004579682s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fj5c6" [1a59c4d2-6c8a-4e52-8dd0-0fe55b16e5a8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003335485s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-014050 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-014050 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.176259745s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tgcrm" [4f4c8f6c-873f-4d2b-9488-d12c3adae611] Running
E1101 10:51:31.096507  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:31.102876  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:31.114241  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:31.135592  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:31.177052  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:31.258451  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:31.420094  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:31.741690  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003218285s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tgcrm" [4f4c8f6c-873f-4d2b-9488-d12c3adae611] Running
E1101 10:51:32.383491  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:33.665053  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:36.226659  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002914349s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-499088 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-499088 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (44.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:51:51.589904  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:52:12.071905  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:52:14.587475  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.771203402s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.77s)
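Note: this start passes --network-plugin=cni together with --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, so the node should advertise a pod CIDR from that range. A hedged way to confirm it after the start completes (a sketch, not part of the test):
	kubectl --context newest-cni-196911 get nodes -o jsonpath='{.items[*].spec.podCIDR}'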

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-548708 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a013fa5d-50ef-4b04-996a-c6fd9681d728] Pending
helpers_test.go:352: "busybox" [a013fa5d-50ef-4b04-996a-c6fd9681d728] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a013fa5d-50ef-4b04-996a-c6fd9681d728] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004258107s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-548708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-548708 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-548708 --alsologtostderr -v=3: (12.373887557s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-196911 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-196911 --alsologtostderr -v=3: (1.338709995s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-196911 -n newest-cni-196911
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-196911 -n newest-cni-196911: exit status 7 (70.207109ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-196911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (19.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-196911 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (19.368143569s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-196911 -n newest-cni-196911
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548708 -n no-preload-548708
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548708 -n no-preload-548708: exit status 7 (102.368343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-548708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (64.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:52:53.033970  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-548708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.926471278s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548708 -n no-preload-548708
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (64.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-196911 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (87.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m27.639958237s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9drd" [f49db6cb-7ce0-44ea-87ba-6431b1d80dea] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00376607s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9drd" [f49db6cb-7ce0-44ea-87ba-6431b1d80dea] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004192739s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-548708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-548708 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (79.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1101 10:54:14.955747  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:28.585974  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:28.592366  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:28.603811  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:28.625195  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:28.666589  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:28.748021  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:28.910231  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:29.231542  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:29.873289  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:31.154581  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:33.716892  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:54:38.838946  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.466897227s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.47s)
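Note: with --cni=kindnet the start only brings the cluster up; a hedged way to check that the kindnet pods are running afterwards (a sketch; the app=kindnet label selector is an assumption about the manifest minikube deploys):
	kubectl --context kindnet-883951 -n kube-system get pods -l app=kindnet -o wide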

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-883951 "pgrep -a kubelet"
I1101 10:54:39.335109  294288 config.go:182] Loaded profile config "auto-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)
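The KubeletFlags step only verifies that kubelet is running inside the node with the expected command line. A hand-run equivalent of the same probe, using this run's profile name:

# Same check the test performs, via minikube's ssh wrapper.
minikube ssh -p auto-883951 "pgrep -a kubelet"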

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-883951 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sdkjx" [50129f86-b781-474c-b6c6-b1a9578dd795] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sdkjx" [50129f86-b781-474c-b6c6-b1a9578dd795] Running
E1101 10:54:49.080559  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004438181s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.38s)
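The NetCatPod step applies testdata/netcat-deployment.yaml (not reproduced in this report) and then polls for pods labelled app=netcat to become healthy. A rough equivalent with plain kubectl, assuming the same context, manifest path, and label the test uses:

# Re-apply the test's netcat deployment and wait (up to the test's 15m) for it to be Ready.
kubectl --context auto-883951 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-883951 wait --for=condition=Ready pod -l app=netcat --timeout=15m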

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-883951 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
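The three checks above (DNS, Localhost, HairPin) each run a single command inside the netcat deployment; collected here as one sketch, using the commands shown in the log:

# DNS: resolve the in-cluster kubernetes service from inside the pod.
kubectl --context auto-883951 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: port 8080 is reachable on the pod's own loopback.
kubectl --context auto-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod can reach itself through its own service name.
kubectl --context auto-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"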

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (67.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m7.776400428s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.78s)
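Each Start step in this group is a plain minikube invocation; a sketch of reproducing this one outside the harness, mirroring the flags logged above (the profile name is just this run's, any name works, and the minikube binary path is whatever is on your PATH):

# Flags copied from the test invocation above.
minikube start -p calico-883951 \
  --memory=3072 --wait=true --wait-timeout=15m \
  --cni=calico --driver=docker --container-runtime=crio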

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-tdjpx" [21cb7dc5-dc08-411c-bd7e-c3cb710f3a66] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006318997s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
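ControllerPod waits for the CNI's daemon pod in kube-system to report healthy. A hand-run equivalent, assuming the same app=kindnet label and 10m budget the test uses:

# Wait for the kindnet daemon pod to become Ready.
kubectl --context kindnet-883951 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m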

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-883951 "pgrep -a kubelet"
I1101 10:55:37.389505  294288 config.go:182] Loaded profile config "kindnet-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-883951 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f8fhx" [e6437eba-01e1-46f9-b154-dc2fedb9115a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f8fhx" [e6437eba-01e1-46f9-b154-dc2fedb9115a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004033665s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-883951 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (66.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.167480903s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.17s)
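Unlike the named CNIs, the custom-flannel group passes a manifest path to --cni; testdata/kube-flannel.yaml here is the test repo's bundled manifest, and any local CNI manifest path would work the same way. A sketch of the invocation:

# --cni accepts a path to a CNI manifest instead of a built-in plugin name.
minikube start -p custom-flannel-883951 \
  --memory=3072 --wait=true --wait-timeout=15m \
  --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio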

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-p6qbd" [698a3e8e-a0c9-448d-a1b1-f92d76a1ff7e] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003903811s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-883951 "pgrep -a kubelet"
I1101 10:56:27.686575  294288 config.go:182] Loaded profile config "calico-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-883951 replace --force -f testdata/netcat-deployment.yaml
I1101 10:56:28.084903  294288 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qpcxd" [89b9c458-f729-41b7-9b42-e780b99b11df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:56:31.097085  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/old-k8s-version-245622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qpcxd" [89b9c458-f729-41b7-9b42-e780b99b11df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004551131s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-883951 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (86.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1101 10:57:12.446341  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/default-k8s-diff-port-014050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:14.587476  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/functional-839033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m26.296182862s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-883951 "pgrep -a kubelet"
I1101 10:57:21.424623  294288 config.go:182] Loaded profile config "custom-flannel-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-883951 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kr2p4" [a3cf5680-fb7c-45fc-9b29-c608ed575973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:57:22.345076  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:22.351788  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:22.363505  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:22.385445  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:22.427388  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:22.509516  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:22.671399  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:22.994611  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:23.637107  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:24.919762  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:27.481986  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kr2p4" [a3cf5680-fb7c-45fc-9b29-c608ed575973] Running
E1101 10:57:32.603928  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003972464s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-883951 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (61.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1101 10:58:03.326714  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.962677949s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-883951 "pgrep -a kubelet"
I1101 10:58:32.991267  294288 config.go:182] Loaded profile config "enable-default-cni-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-883951 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-94ftl" [16f5b626-1231-4d9f-8f14-26ef7f28f8df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-94ftl" [16f5b626-1231-4d9f-8f14-26ef7f28f8df] Running
E1101 10:58:44.288978  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/no-preload-548708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003299438s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-883951 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-wcmvk" [c6662012-1684-45d1-b70b-30ef43165e08] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003541006s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-883951 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (83.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
I1101 10:59:07.135610  294288 config.go:182] Loaded profile config "flannel-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-883951 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m23.596073688s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-883951 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sbktr" [e3aa03c7-bac7-4be6-9dd4-844d9a6b0614] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sbktr" [e3aa03c7-bac7-4be6-9dd4-844d9a6b0614] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004958262s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-883951 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-883951 "pgrep -a kubelet"
I1101 11:00:30.957965  294288 config.go:182] Loaded profile config "bridge-883951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-883951 replace --force -f testdata/netcat-deployment.yaml
E1101 11:00:30.974979  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:30.981315  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:30.993177  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:31.014603  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:31.055970  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:31.137619  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-92n24" [d86ca83b-fc7a-4df8-980f-89e33d047749] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 11:00:31.302697  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:31.624698  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:32.267095  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:00:33.549393  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-92n24" [d86ca83b-fc7a-4df8-980f-89e33d047749] Running
E1101 11:00:36.111589  294288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kindnet-883951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004500797s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-883951 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-883951 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.71s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-896540 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-896540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-896540
--- SKIP: TestDownloadOnlyKic (0.71s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-514829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-514829
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-883951 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-883951" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:39:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-946953
contexts:
- context:
    cluster: kubernetes-upgrade-946953
    user: kubernetes-upgrade-946953
  name: kubernetes-upgrade-946953
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-946953
  user:
    client-certificate: /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kubernetes-upgrade-946953/client.crt
    client-key: /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kubernetes-upgrade-946953/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-883951

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-883951"

                                                
                                                
----------------------- debugLogs end: kubenet-883951 [took: 4.518421334s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-883951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-883951
--- SKIP: TestNetworkPlugins/group/kubenet (4.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-883951 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-883951" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21832-292445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:39:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-946953
contexts:
- context:
    cluster: kubernetes-upgrade-946953
    user: kubernetes-upgrade-946953
  name: kubernetes-upgrade-946953
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-946953
  user:
    client-certificate: /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kubernetes-upgrade-946953/client.crt
    client-key: /home/jenkins/minikube-integration/21832-292445/.minikube/profiles/kubernetes-upgrade-946953/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-883951

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-883951" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-883951"

                                                
                                                
----------------------- debugLogs end: cilium-883951 [took: 6.015301956s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-883951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-883951
--- SKIP: TestNetworkPlugins/group/cilium (6.21s)

                                                
                                    